Stanford University releases AI foundation model "Transparency Index": Llama 2 tops the list but "fails" at 54%

巴比特_

According to an IT House report on October 20, Stanford University recently released a "Transparency Index" for AI foundation models. The highest-scoring model was Meta's Llama 2, yet its transparency score was only 54%, leading the researchers to conclude that nearly all AI models on the market "lack transparency".

The research was led by Rishi Bommasani of Stanford's Center for Research on Foundation Models (CRFM), under the Stanford Institute for Human-Centered AI (HAI), and examined 10 of the most popular foundation models. Bommasani argues that a "lack of transparency" has long been a problem for the AI industry. As for what the "Transparency Index" actually measures, IT House found that the evaluation covers 100 indicators in total, mainly concerning the copyright status of model training datasets, the computing resources used to train the model, the trustworthiness of model-generated content, the model's own capabilities, the risk of the model being induced to generate harmful content, and the privacy of users interacting with the model.

The final results showed Meta's Llama 2 topping the list with a transparency score of 54%, while OpenAI's GPT-4 scored only 48% and Google's PaLM 2 ranked fifth at 40%.
