

Progress and Challenges in Open-Domain Long-Form Question Answering Systems

Tensorflowers · Source: Google Research · 2021-05-31 10:02

Posted by: Aurko Roy, Research Scientist, Google Research

Open-domain long-form question answering (LFQA) is a fundamental challenge in natural language processing (NLP): it involves retrieving documents relevant to a given question and using them to generate a detailed, paragraph-length answer. In factoid open-domain question answering (QA), a simple short phrase or entity is enough to answer the question. While there has been remarkable recent progress on factoid QA, far less progress has been made on long-form answers. LFQA is nonetheless an important task, especially because it provides a testbed for measuring the factuality of generative text models. But are current benchmarks and evaluation metrics really able to measure progress on LFQA?

In "Hurdles to Progress in Long-form Question Answering" (to appear at NAACL 2021), we present a new open-domain long-form question answering system that leverages two recent advances in NLP:

1. A state-of-the-art sparse attention model, the Routing Transformer (RT), which allows attention-based models to scale to long sequences;

2. A retrieval-based model, REALM, which facilitates retrieval of Wikipedia articles relevant to a given query.

Routing Transformer

https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00353

To encourage more factual grounding, our system combines information from several Wikipedia articles retrieved for a given question before generating an answer. The system achieves a new state of the art on ELI5, the only large-scale publicly available dataset for long-form question answering.

ELI5

https://ai.facebook.com/blog/longform-qa/

However, while our system tops the public leaderboard, we discovered several concerning trends in the ELI5 dataset and its associated evaluation metrics. In particular, we found that 1) there is little evidence that models actually use the retrievals they condition on; 2) trivial baselines (e.g., input copying) beat modern systems such as RAG and BART + DPR; and 3) there is significant train/validation overlap in the dataset. Our paper suggests mitigation strategies for each of these issues.

Input copying

https://eval.ai/web/challenges/challenge-page/689/leaderboard/1908#leaderboardrank-6

Text generation

The core component of most NLP models is the Transformer architecture, in which each token in a sequence attends to every other token, resulting in a model whose cost grows quadratically with sequence length. The RT model introduces a dynamic, content-based sparse attention mechanism that reduces the complexity of attention in the Transformer from n^2 to n^1.5 (where n is the sequence length), enabling it to scale to long sequences. This allows each word to attend to any other relevant word anywhere in the entire text, unlike methods such as Transformer-XL, where a word can only attend to words in its immediate vicinity.

The key insight behind RT is that each token's attention to every other token is often redundant and can be approximated by a combination of local and global attention. Local attention allows each token to build a local representation over several layers of the model, where each token attends to a local neighborhood, providing local consistency and fluency. Complementing local attention, the RT model also uses mini-batch k-means clustering so that each token attends only to a set of the most relevant tokens.
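The routing idea can be sketched in a few lines of Python: cluster token representations with a few k-means steps, then let each token attend to a local window plus the tokens routed to its own cluster. This is a toy sketch of the attention pattern only (no attention weights, and not the paper's actual implementation):

```python
import random

def kmeans(vectors, k, iters=5, seed=0):
    """Tiny k-means: returns a cluster id for every vector."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    assign = [0] * len(vectors)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, v in enumerate(vectors):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(v, centroids[c])))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors)) if assign[i] == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assign

def routing_attention_pattern(n_tokens, assign, window=2):
    """For each token, the positions it may attend to: a local window
    (local attention) plus every token routed to the same k-means
    cluster (content-based sparse attention)."""
    pattern = []
    for i in range(n_tokens):
        local = set(range(max(0, i - window), min(n_tokens, i + window + 1)))
        cluster = {j for j in range(n_tokens) if assign[j] == assign[i]}
        pattern.append(local | cluster)
    return pattern
```

With roughly sqrt(n) clusters, each token attends to about O(sqrt(n)) positions, which is where the overall n^1.5 cost comes from.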

We pre-trained an RT model on the Project Gutenberg (PG-19) dataset with a language modeling objective, i.e., given all the preceding words, the model learns to predict the next word, so that it can generate fluent, paragraph-length text.
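A counts-based bigram model gives a minimal, illustrative version of this next-word objective (the real model is a large Transformer trained on PG-19, not a bigram table):

```python
import collections

def train_bigram_lm(tokens):
    """Count next-word frequencies: a toy stand-in for the language
    modeling objective (predict the next word given the preceding
    words) used to pre-train the RT model."""
    counts = collections.defaultdict(collections.Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev):
    """Most frequent continuation of `prev`, or None if unseen."""
    if prev not in counts:
        return None
    return counts[prev].most_common(1)[0][0]
```

For example, after training on `"the cat sat and the cat ran"`, the most likely word after `"the"` is `"cat"`.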

Project Gutenberg (PG-19)

https://deepmind.com/blog/article/A_new_model_and_dataset_for_long-range_memory

Information retrieval

To demonstrate the effectiveness of the RT model on the LFQA task, we combine it with retrievals from REALM. The REALM model (Guu et al., 2020) is a retrieval-based model that uses maximum inner product search to retrieve Wikipedia articles relevant to a particular query or question. The model was fine-tuned for factoid question answering on the Natural Questions dataset. REALM uses a BERT model to learn a good representation of the question and ScaNN to retrieve Wikipedia articles with high topical similarity to the question representation. The whole system is then trained end-to-end to maximize the log-likelihood on the QA task.
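The retrieval step can be illustrated with a brute-force maximum inner product search; REALM pairs a learned BERT question encoder with an approximate MIPS index (ScaNN) to make this tractable over all of Wikipedia. The sketch below uses hand-made 2-d vectors in place of learned embeddings:

```python
def mips_retrieve(query_vec, doc_vecs, top_k=2):
    """Exact maximum inner product search by brute force: score every
    document vector against the query and return the top-k indices."""
    scored = sorted(
        ((sum(q * d for q, d in zip(query_vec, vec)), idx)
         for idx, vec in enumerate(doc_vecs)),
        reverse=True)
    return [idx for _, idx in scored[:top_k]]

# Hand-made 2-d "embeddings" standing in for question/article vectors.
doc_vecs = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]]
query = [0.9, 0.1]
print(mips_retrieve(query, doc_vecs, top_k=2))  # → [1, 2]
```

At Wikipedia scale, exact scoring of every document is infeasible, which is why approximate nearest-neighbor indices are used in practice.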

We further improve the quality of REALM retrievals with a contrastive loss. The idea is to push the question representation closer to its ground-truth answer while keeping it distinct from the other answers in the mini-batch. This ensures that when the system retrieves relevant items using this question representation, it returns articles "similar" to the ground-truth answer. We call this retriever contrastive-REALM, or c-REALM.
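The in-batch contrastive idea can be sketched as a softmax cross-entropy over dot-product scores, where each question's gold answer is the positive and the other answers in the mini-batch act as negatives. This is an illustrative InfoNCE-style loss, not the exact c-REALM training objective:

```python
import math

def contrastive_loss(question_vecs, answer_vecs):
    """In-batch contrastive loss: question i should score its own gold
    answer (answer_vecs[i]) higher than every other answer in the
    mini-batch. Returns the mean negative log-softmax of the gold answer."""
    losses = []
    for i, q in enumerate(question_vecs):
        scores = [sum(a * b for a, b in zip(q, ans)) for ans in answer_vecs]
        log_norm = math.log(sum(math.exp(s) for s in scores))
        losses.append(log_norm - scores[i])  # -log p(gold answer | question)
    return sum(losses) / len(losses)
```

Minimizing this loss pulls each question embedding toward its gold answer and away from the other answers in the batch, which is the behavior described above.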

Contrastive loss

https://towardsdatascience.com/contrastive-loss-explaned-159f2d4a87ec

Evaluation

We tested the model on long-form question answering using the ELI5 dataset, which is part of the KILT benchmark and the only publicly available large-scale LFQA dataset. The KILT benchmark measures text retrieval quality with R-Precision (R-Prec) and text generation quality with ROUGE-L. The two scores are combined into a single KILT R-L score, which determines a model's ranking on the leaderboard. We fine-tune the pre-trained RT model together with c-REALM retrievals on the ELI5 portion of KILT.
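ROUGE-L scores a generated answer by the longest common subsequence (LCS) it shares with a reference answer. A simplified, whitespace-tokenized version (real evaluations typically add stemming and other normalization) looks like this:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    """ROUGE-L F1 over whitespace tokens: harmonic mean of
    LCS-based precision and recall."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)
```

Because the metric rewards any shared subsequence, long answers with generic overlapping phrases can score well even when they are not grounded in the question, which is relevant to the issues discussed below.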

Our submission tops the KILT leaderboard for long-form question answering on ELI5 with a combined KILT R-L score of 2.36. This is an improvement over the previous leaderboard entry of BART + DPR (KILT R-L score of 1.9), while using a number of parameters similar to the other models on the leaderboard. In terms of text generation quality, we see improvements of 4.11, 5.78, and 9.14 ROUGE-L over T5, BART + DPR, and RAG, respectively.

Leaderboard

https://eval.ai/web/challenges/challenge-page/689/leaderboard/1908

Examples generated by the RT model

Q1

Why are almost all boats white?

Boats are generally painted white for obvious reasons, as they are inexpensive and easy to clean. Ships are also often painted to provide a visible visual contrast to the sea. Boats also often have high visibility ports to allow for easy international trade.

In addition, many boats have an air-breathing function that makes them easier to spot and stay afloat, which is a good thing if you're a pirate or trying to get into some country's waters. Lastly, many ships are built in the last few decades, so they are not as exotic looking as they were thirty years ago.

And lastly, many ships are made out of materials that are easy to work with, and so many of them are painted white as a practical matter. There are many, many more reasons for why ships are white, but those are the ones I know of. I'm sure there are many other reasons, but those are the ones I know of.

Edit: I forgot to mention that the other answers about visibility and easy to see ports are also true. Ships are painted to appear to be larger, and more expensive. That makes them appear more impressive, and so they are more likely to be bought by people who want to show off their wealth and status.

Q2

What causes the trail behind jets at high altitude?

The same thing that causes the clouds you see at the start of a storm to form in the first place. The air pressure changes as the jet moves through the air, causing it to cool down and condense into water droplets. These droplets are in fact a lot bigger than the water droplets in the clouds, so when the plane passes through them, they can cause the air pressure to change, causing the water droplets to condense into clouds.

This is what you see when you look at a cloud formation from the ground. In addition to this, when the plane passes through the clouds, it also moves through the air, causing it to change its temperature, causing the water droplets to condense and form clouds.

These clouds are often quite large, and can block out the sun. This is what you see when you look at a contrail. Finally, the water vapor that forms at high altitudes is cooled by the air above, and as it cools, it condenses into clouds. This is what you see when you look at a cumulonimbus cloud. These clouds are even larger than the water droplets in the clouds, and they block out the sun.

Q3

How do shampoo and conditioner work?

The key ingredients in shampoo and conditioner are usually surfactants, they tend to attract oil from your skin. Shampoo will usually contain an anti-dandruff agent (dandruff is caused by fungi in your skin which cause dandruff). They also contain some moisturizing agents (moisturizers are things that stay in your skin longer and protect your scalp from cleaning),

And a lot of anti-dandruff (dandruff is a side effect of high doses of both surfactants and moisturizers) Conditioner will contain moisturizing agents, like a skin moisturizer. They tend to cause oils to be separated from your hair (e.g. wash your hair with conditioner instead of shampoo). They both have a foaming agent that makes them foam up (usually Hydrogen Peroxide). They also have a foaming agent that stops dirt from attaching to your hair. Some conditioner will also suppress the action of the shampoo (to help prevent dirt from attaching to your hair).

Q4

When glancing at a clock, why does the first second after glancing at it sometimes feel longer than the rest?

This is known as chronostasis. Your brain gets used to seeing the second hand move after the first, so it ignores the first one. If you were to look at the clock a lot, you'd see the second hand move a lot more than the first, and you'd be able to tell the difference between the second and the first.

The second hand will appear to move much slower than the first hand because your brain is used to seeing the second hand move a lot more than the first. If you were to go on a long trip, you'd get used to the second hand moving a lot more than the first, and you'd have no problem telling the difference between the second and the first hand.

It's kind of like how you don't notice your nose when you're driving, but if you look at it for a while, you start to notice it. Your brain is just getting used to seeing the nose move a lot more than the other, and it can't tell the difference between the two. It's kind of like that with your eyes.

Q5

Why do Britain and other English empire countries still bow to monarchs? What real purpose does the queen serve?

Depends what you define as "purpose". There is no specific purpose for the British monarch - rather, they are figureheads that serve a role. The role is to have a high degree of influence over a country, though the rules of the monarchy can somewhat vary from country to country, and the ceremony of "kissing the royal corpse" does not always involve kissing the monarch. Whether that's the only reason for the custom, or if it was the only reason,

I can't say, but that's the reasoning. When the U.K. was in full power, the monarch was a puppet of Parliament, and the powers of the monarch were transferred to the Prime Minister and thus to the Prime Minister's deputy, who then became the Prime Minister. Since then, Parliament has been able to vote on legislation that goes through the monarch,

although they may still act as the monarch's representative in negotiating treaties, which can have very very deep consequences. The Queen's role, as a representative of Britain, doesn't necessarily involve her formal approval of any of the laws or legislation that goes through Parliament, though.

Q6

What exactly is fire, in detail? How can light and heat come from something we can't really touch?

Fire is the chemical reaction of fuel (oxygen) with (what we call) impurities (ash, soot, oil etc). These impurities are created due to a number of things. If you were to study the chemical composition of a log (or a candle) and analyse it you would discover the chemical formula (A, C, and G) which is a simplified version of what actually happens. There is always fuel, and there is always the ash/soot. Wood is made up of mostly carbon and hydrogen and these are the fuels that create the fire. Hydrogen and oxygen are reactive, and they form water (H2O) and CO2 (CO2). If we split a log into smaller pieces, we start to see the process of combustion as it occurs. The carbon is burned, and with the explosion of CO, H2O and H2CO2, we get water and carbon dioxide (mostly). If we smash a log together, we start to see what happens when we also hit another log with another log...imagine smashing two sand castles together. We see how the sand castles and brick castles form and collapse at different rates...then it starts to burn. The smoke, ash and flames are the actual combustion of fuel.

Q7

What goes on in those tall tower buildings owned by major banks?

The actual buildings are not there for profit but for show. There are many real world historical buildings that have little to no effect other than being a symbol of an economic entity or symbol of an era or location, and nothing more. For example look at Sears, Sears didn't care what went on inside, it was all about the \_appearance\_ of its location, the prestige of the location, the facilities and so on. It didn't care about how long it took it to operate, it was about how much people would pay to go see it. Sears was a landmark as a cultural movement and other big companies followed suit, so if you want to see a building you've never seen before, you have to go see Sears, just like you have to see a Toyota Camry for Toyota Camry. They used to be all about building new factories, some of them if I recall, but now that they're bigger, that means that more factory jobs are coming to them. You've probably seen them in stores as stores where people buy and sell stuff, so there aren't that many places for them to come from. Instead, it's just for show, a symbol of rich people.

Hurdles to progress in LFQA

However, while the RT system described here tops the public leaderboard, a detailed analysis of the model and the ELI5 dataset reveals some concerning trends.

Train/Valid Overlap

Many held-out questions are paraphrased in the training set. Simply returning the best answer to a similar training question achieves 27.4 ROUGE-L.

Lack of Grounding

Conditioning answer generation on random documents instead of relevant ones does not measurably impact its factual correctness. Longer outputs get higher ROUGE-L.

We find little to no evidence that the model actually grounds its text generation in the retrieved documents: a fine-tuned RT model paired with random retrievals from Wikipedia (i.e., random retrieval + RT) performs nearly as well as the c-REALM + RT model (24.2 vs. 24.4 ROUGE-L). We also find significant overlap among the ELI5 train, validation, and test sets (several questions are paraphrases of one another), which may remove the need for retrieval altogether. The KILT benchmark measures retrieval and generation quality separately, but does not check whether the text generation actually uses the retrievals.
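Train/validation overlap can be surfaced even with a crude lexical-similarity check; the paper used a trained QQP paraphrase classifier, for which the Jaccard heuristic below is only an illustrative stand-in:

```python
def token_overlap(q1, q2):
    """Jaccard similarity over lowercased word sets: a crude stand-in
    for a learned paraphrase classifier."""
    a, b = set(q1.lower().split()), set(q2.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_overlaps(valid_questions, train_questions, threshold=0.6):
    """Return validation questions whose most similar training
    question exceeds a lexical-similarity threshold."""
    flagged = []
    for vq in valid_questions:
        best = max((token_overlap(vq, tq) for tq in train_questions),
                   default=0.0)
        if best >= threshold:
            flagged.append(vq)
    return flagged
```

A real paraphrase detector catches reworded duplicates that share few exact tokens, which is why a trained classifier is needed in practice.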

Trivial baselines achieve higher ROUGE-L scores than RAG and BART + DPR

Furthermore, we find issues with using ROUGE-L to evaluate text generation quality: trivial, nonsensical baselines such as a random training-set answer or copying the input achieve relatively high ROUGE-L scores, even beating BART + DPR and RAG.

Conclusion

我們?yōu)榛?Routing Transformers 和 REALM 的長格式問答推出了一個(gè)系統(tǒng),該系統(tǒng)在關(guān)于 ELI5 的 KILT 排行榜中名列前茅。但是,詳細(xì)的分析揭示了存在的一些問題,即無法使用基準(zhǔn)來顯示有意義的建模進(jìn)展。我們希望社區(qū)共同合作,一起解決這些問題,以便研究人員向正確的高峰攀登,在這個(gè)充滿挑戰(zhàn)但十分重要的任務(wù)中取得有意義的進(jìn)展。

Acknowledgements

The Routing Transformer was a team effort by Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. The follow-up work on open-domain long-form question answering was a collaboration between Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. We wish to thank Vidhisha Balachandran, Niki Parmar, and Ashish Vaswani for several useful discussions, and the REALM team (Kenton Lee, Kelvin Guu, Ming-Wei Chang, and Zora Tung) for help with the codebase and several useful discussions that helped us improve our experiments.

We are grateful to Tu Vu for help with the QQP classifier used to detect paraphrases in the ELI5 train and test sets. We thank Jules Gagnon-Marchand and Sewon Min for suggesting useful experiments on checking ROUGE-L bounds. Finally, we thank Shufan Wang, Andrew Drozdov, Nader Akoury, and the rest of the UMass NLP group for helpful comments and suggestions at various stages of the project.

Editor: jq


Original title: Progress and Challenges in Open-Domain Long-Form Question Answering Systems

Source: WeChat official account Tensorflowers (WeChat ID: tensorflowers). Please credit the source when reposting.
