Author Topic: Sam Altman removed as CEO of OpenAI  (Read 1194 times)

0 Members and 1 Guest on this topic

alain_p

  • Free fiber subscriber
  • Posts: 16,282
  • Delta S 10G-EPON in Les Ulis (91)
Sam Altman removed as CEO of OpenAI
« Reply #12 on: November 23, 2023 at 21:18:07 »
Reuters published an article today claiming that Sam Altman's dismissal as CEO by the board may have been triggered by a letter from several senior OpenAI researchers warning them about a project called Q*, which had reportedly succeeded in getting the AI to solve simple, grade-school-level math problems. Sam Altman had allegedly covered for this project and concealed it from the board. Fearing that its development could lead the AI to reprogram and replicate itself, and thus fearing for the future of humanity, the board allegedly fired Sam Altman.

Until now, no artificial intelligence had reportedly managed to solve a mathematical problem. One day after receiving this letter, the board allegedly fired Sam Altman.

Reuters learned of the letter from internal sources, but reportedly could not obtain a copy of it.

Granted, this all sounds a bit far-fetched, but the rapid advances in AI seen over the past year call for caution.

Quote
OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say

By Anna Tong, Jeffrey Dastin and Krystal Hu · November 23, 2023, 10:52 AM GMT+1 · Updated 9 hours ago

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

'VEIL OF IGNORANCE'

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.
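The "statistically predicting the next word" point above can be illustrated with a toy bigram model. This is a deliberately simplified sketch, not how OpenAI's models actually work: real generative models use neural networks over subword tokens, but the principle that the next token is picked from a learned probability distribution rather than derived by exact reasoning is the same.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a small
# corpus, then predict the most frequent successor. The output is the most
# probable continuation, not a reasoned answer -- which is why math, with
# its single right answer, is a harder target for this kind of system.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat"/"fish" once each
```

Here `predict_next("the")` returns "cat" simply because that continuation is the most frequent, regardless of whether it is true or correct in context.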

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest growing software applications in history and drew investment - and computing resources - necessary from Microsoft to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.
https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/
« Modified: November 23, 2023 at 21:39:49 by alain_p »

kgersen

  • Moderator
  • Bbox fiber subscriber
  • Posts: 9,092
  • Paris (75)
Sam Altman removed as CEO of OpenAI
« Reply #13 on: November 24, 2023 at 17:13:52 »
Fearing that its development could lead the AI to reprogram and replicate itself, and thus fearing for the future of humanity, the board allegedly fired Sam Altman.

But who could believe such a thing? This is gutter-level FUD, probably meant to make investing in "AI" even more attractive...

Let's not kid ourselves: all this hubbub is meant to inflate the value of OpenAI and generative "AI" (which should be called "generative text or image completion" rather than "Artificial Intelligence"). In short, it's mega marketing BS to sell ChatGPT and Copilot subscriptions at MS, and to keep the funding floodgates from looking elsewhere for "the next big thing".

Let's not forget that in terms of "next big thing", in just a few years we've gone through:
- blockchain / cryptocurrencies / NFTs
- the Metaverse
- and now "AI"...
And more often than not it's the same players, notably the YC crowd (https://fr.wikipedia.org/wiki/Y_Combinator ), who feature prominently in "AI". Mapping out YC's social graph sheds light on quite a few things... (without veering into conspiracy theory either).

As for OpenAI, the board members who were not "pro-profit" ended up ejected and replaced by "friends" from the YC crowd or politically influential people.
I wouldn't be surprised if they then do everything they can to value the company very highly before selling it to MS or taking it public with a record-breaking IPO.

All these stories about "safety" and AGI are pure hot air, times ten. And of course it's pure profit for the press, and clickbait.