By Anjan Roy
Sam Altman became globally known overnight, not because of what he had already achieved, which was seminal, but because he was summarily sacked by the board of directors of the startup he had established.
Sam Altman had become a legend at a young age for his role in developing artificial intelligence software known as the large language model (LLM). On that base, his firm, OpenAI, developed “ChatGPT”, a versatile AI model that became the wonder of the brave new technology regime.
And now, the virtual creator of this system was thrown out by the board of directors of the company he headed, the very company that had produced the ChatGPT software. On the face of it, this was a personality clash between the head of the firm, Altman, and its chief scientific officer. The reason given for sacking Altman was that his communications with the board were misleading and did not provide adequate information.
The sacking of Sam Altman immediately sent the value of OpenAI tumbling. The investors turned wildly against the board and asked it to recall Altman to the company. Other staff members were also upset, and in a letter they demanded that Sam Altman be brought back. The board was, so to say, humiliated.
But by the time the investors and staff reacted, something else had happened. Microsoft, which had backed Altman’s OpenAI, reacted with lightning speed to the news and directly recruited him to run an in-house AI team within Microsoft. Sam Altman was announced as the head of Microsoft’s own AI outfit.
That was a double-edged sword for Microsoft. The software giant had earlier spent $3 billion to prop up OpenAI as a startup and had eagerly snapped up the ChatGPT software to offer alongside its more traditional, long-established software products.
There is an issue for Microsoft. Even after spending as much as $3 billion on this company, was Microsoft sleeping while the storm was gathering in OpenAI? Such a major investor did not have an inkling of what was happening inside the company. Microsoft’s own investors would certainly raise these issues.
The fierce gale over the removal of a single man is indicative of the nature of the emerging high-technology world we are now witnessing. This is the post-industrial world we are entering. Here the focus is not on assembly-line products like the making of a car. This new world order is centred around individuals and ideas. A single idea can change the familiar world. A single idea can have mind-blowing commercial value.
All AI is, at root, statistical regression analysis. If you are a smartphone user writing a message, you will be constantly prompted with the next word by the software embedded in your smartphone. More often than not, if you are using a high-end phone, the prompted word will be exactly the one you had in mind, or the most appropriate one.
How does the machine prompt you so? It does so because the software underlying the phone has gone through a huge base of words used in written or spoken language. Based on its statistical analysis of the words it has studied, running into millions, the language model calculates the likelihood of the next word in the sequence.
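The idea can be illustrated with a toy sketch, not the actual technology: a minimal bigram model that counts, over a tiny sample corpus (an assumption for illustration; real language models train on billions of words and use far more sophisticated statistics), how often each word follows another, and then suggests the most frequent follower.

```python
from collections import Counter, defaultdict

# A toy corpus for illustration; real models ingest billions of words.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat ran after the dog"
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("sat"))  # in this corpus, "on" always follows "sat"
```

A phone keyboard works on the same principle at vastly larger scale: it ranks candidate next words by how likely they are given the words already typed.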
Sam Altman’s firm, OpenAI, has perfected this technology to such a level that the software can not only predict the next word to be used. It can also give you, say, the answer to a law examination question (in one case, the American Bar Association entrance exam) so perfectly that the software could pass the examination with ease.
Normally, this examination has been found to be so tough that only a handful of candidates qualify after very long preparation. ChatGPT, of Sam Altman’s firm, could pass it in no time. That exemplifies the power of the system developed by his firm under its enigmatic leader, Sam Altman.
When the ChatGPT product was announced and released, it created a sensation throughout the tech world. Nothing similar in raw power had been developed before. The system can be put to a variety of uses, including, say, in medicine. It could master the entire knowledge available thus far in a specific area, say pulmonary diseases, relate it to a massive database of case studies, and then diagnose an individual case promptly.
Journalists felt threatened once ChatGPT was released. The fear was that the AI system could be asked to produce an article on, say, the climate crisis. ChatGPT could be trained to ingest the style and manner of writing of a given journalist, along with everything else that had appeared so far, and then produce in no time an article that the journalist would have taken much longer to write. In that case, newspapers could get rid of journalists and bring out a paper without employing any.
In fact, it did lead to prolonged industrial action in Hollywood in America. The actors and screenplay writers struck work for months, until only recently, fearing that their existing contracts could be tweaked to produce screenplays or even acting clips without the writers or actors ever having written or acted those parts.
This should be possible with AI software, which would gorge on the earlier works of the identified writers or actors and then pour forth newer items based on its study of their past performances. All that the AI system needs is some clips of the writers’ or actors’ former work.
In fact, we recently had a nightmare demonstration of the powers of such AI. Video clips of a well-known actress were used to produce a “fake” scene in which the actress was doing and saying things she never did. Yet it was so perfect that nobody could have detected it was fake and produced by an AI system.
These kinds of AI could play havoc. They could destroy an entire political set-up. Imagine a leading politician’s past clips being used to craft a speech fomenting fire against some people, causes or issues. It could be ruinous.
AI is thus crafting a dangerous new world. Many experts warn that AI could eventually surpass human intelligence and play havoc with our known world. These systems need to be regulated. In fact, one critical issue that led to the rift between Sam Altman and the OpenAI board was that the founder-promoter was not open to considering the tragic consequences of the possible monsters AI was creating. (IPA Service)