Category: Artificial Intelligence

  • Is the EU AI Act in trouble? The OpenAI drama gives us some hints.

    The EU AI Act, aiming to be a significant piece of AI legislation, is currently facing uncertainty. The debate centers around how to regulate ‘foundation’ models, massive AI models like GPT-4, Claude, and Llama.

    Recently, the French, German, and Italian governments proposed limited regulation for these foundation models. Many argue that this move is influenced by heavy lobbying from Big Tech and open-source companies like Mistral, advised by Cédric O, a former digital minister for the French government. Critics view this as a “power grab” that could weaken the EU AI Act.

    On the other side, supporters of regulating foundation models within the EU AI Act are pushing back. Just yesterday, the Future of Life Institute, known for the six-month ‘pause’ letter, released another open letter urging the German government not to exempt foundation models. They argue that such an exemption would harm public safety and European businesses. Signatories include prominent AI researchers like Geoffrey Hinton and Yoshua Bengio, along with AI critic Gary Marcus.

    French experts, including Bengio, echoed these concerns in a joint op-ed in Le Monde, opposing what they see as attempts by Big Tech to undermine this crucial legislation in its final phase.

    So why is the EU AI Act hitting a roadblock now, in the final stages of negotiation, two and a half years after the draft rules were first proposed? The Act focuses on high-risk AI systems, transparency requirements for AI that interacts with humans, and AI built into regulated products. The current phase, known as the trilogue, involves EU lawmakers and member states negotiating the final details. The European Commission had hoped to pass the AI Act by the end of 2023, avoiding potential political fallout from the 2024 European Parliament elections.

    Surprisingly, the current drama at OpenAI – in which CEO Sam Altman was ousted by the company’s non-profit board, only to be reinstated five days later after two other board members were removed and replaced – sheds light on the controversy playing out in the EU.

    According to reports, Altman and president Greg Brockman, both members of the non-profit board, favored pursuing commercial opportunities as a way to fund the development of AGI. The three non-employee board members – Adam D’Angelo, Tasha McCauley, and Helen Toner – were far more concerned about AI safety, reportedly to the point of preferring to shut OpenAI down rather than release a technology they considered too risky. After recruiting chief scientist Ilya Sutskever to their side, they fired Altman.

    All three non-employee members have been linked to the Effective Altruism movement – which brings us back to the lobbying around the EU AI Act. The Future of Life Institute, led by its president Max Tegmark, focuses on existential risk from AI and has its own ties to effective altruism. Last week, the Wall Street Journal reported that the Effective Altruism community has spent heavily to promote the idea that AI poses an extinction-level threat.

    But Big Tech, including OpenAI, has been actively involved in lobbying as well. It’s important to remember that Sam Altman, the returning CEO of OpenAI, had sent mixed signals about AI regulation for months, especially in the EU. Back in June, a TIME investigation uncovered that while Altman was publicly advocating for global AI regulation, behind the scenes, OpenAI had lobbied to weaken “significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act.” Documents obtained by TIME from the European Commission through freedom of information requests revealed this effort to reduce regulatory constraints on the company.

    Gary Marcus recently pointed to the OpenAI situation as a reason for EU regulators to ensure that Big Tech doesn’t get the chance to regulate itself. Supporting the tiered approach backed by the European Parliament for handling risks linked to foundation models in the EU AI Act, Marcus emphasized on X: “The chaos at OpenAI only serves to highlight the obvious: we can’t trust big tech to self-regulate. Reducing critical parts of the EU AI Act to an exercise in self-regulation would have devastating consequences for the world.”

    Brando Benifei, one of the European Parliament lawmakers leading negotiations on the laws, highlighted the recent drama around Altman being ousted from OpenAI and subsequently joining Microsoft. He told Reuters last week, “The understandable drama around Altman being sacked from OpenAI and now joining Microsoft shows us that we cannot rely on voluntary agreements brokered by visionary leaders.”

    For now, it remains unclear whether the EU AI Act is truly at risk of being undone. German consultant Benedikt Kohn, who published an analysis yesterday, pointed out that while negotiation rounds continue, a deal must be reached quickly: the next trilogue is scheduled for December 6, and Spain’s turn holding the rotating Council presidency ends with the year.

    He added that a failure of the EU AI Act would be a bitter blow for everyone involved, since the EU has long positioned itself as a global front-runner in regulating artificial intelligence.

  • The application of human-like techniques allows AI on edge devices to continue learning over time.

    With the PockEngine training method, machine-learning models can efficiently and continuously learn from user data on edge devices like smartphones.

    Such personalized deep-learning models could power chatbots that learn to understand a particular user’s accent, or smart keyboards that continually update to better predict the next word based on an individual’s typing history. Achieving that customization requires repeatedly fine-tuning a machine-learning model with fresh data.

    Because edge devices such as smartphones lack the memory and computing power that this fine-tuning process demands, user data is typically sent to cloud servers where the model is updated. But transmitting the data consumes a great deal of energy, and sending sensitive user data to an external cloud server poses a security risk.

    Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere have developed a technique that enables deep-learning models to efficiently adapt to new sensor data directly on an edge device.

    Their on-device training method, called PockEngine, determines which parts of a huge machine-learning model need to be updated to improve accuracy, and stores and computes with only those pieces. The bulk of this computation is performed while the model is being prepared, before runtime, which minimizes computational overhead and speeds up fine-tuning.

    Compared with other approaches, PockEngine performed on-device training up to 15 times faster on some hardware platforms, without any loss of accuracy. The researchers also found that their fine-tuning method enabled a popular AI chatbot to answer complex questions more accurately.

    On-device fine-tuning can offer better privacy, lower costs, customization, and even lifelong learning, but it is not easy, says Song Han, an associate professor in EECS, a member of the MIT-IBM Watson AI Lab, a distinguished scientist at NVIDIA, and senior author of an open-access paper describing PockEngine. Everything has to happen within a limited budget of resources, and the goal is to run not only inference but also training on an edge device. With PockEngine, he says, “now we can.”

    Han is joined on the paper by lead author Ligeng Zhu, an EECS graduate student, as well as others at MIT, the MIT-IBM Watson AI Lab, and the University of California San Diego. The paper was recently presented at the IEEE/ACM International Symposium on Microarchitecture.

    In the realm of deep learning, models are constructed based on neural networks, comprising interconnected layers of nodes, or “neurons.” These neurons process data to generate predictions. When the model is set in motion, a process known as inference takes place. During inference, a data input, such as an image, traverses through the layers until a prediction, like an image label, is produced at the end. Notably, each layer doesn’t need to be stored after processing the input during inference.

    However, during training and fine-tuning, the model undergoes backpropagation: its output is compared to the correct answer, and the model is then run in reverse, with each layer updated to nudge the output closer to that answer. Since fine-tuning may require updating every layer, the entire model and all its intermediate results must be stored, making training far more memory-intensive than inference.
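
    To make that contrast concrete, here is a minimal, illustrative PyTorch sketch (not code from the paper): the inference pass below can discard intermediate activations as it goes, while the training step must keep them so backpropagation can compute gradients.

    ```python
    import torch
    import torch.nn as nn

    # A small illustrative model; the layer sizes are arbitrary.
    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
    x = torch.randn(32, 128)               # a batch of example inputs
    target = torch.randint(0, 10, (32,))   # example labels

    # Inference: no gradients are needed, so intermediate activations
    # do not have to be kept around.
    with torch.no_grad():
        preds = model(x).argmax(dim=1)

    # Fine-tuning step: the forward pass stores activations for every layer
    # so the backward pass can compute gradients, which is why training
    # needs far more memory than inference.
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

    loss = loss_fn(model(x), target)  # forward pass keeps intermediates
    loss.backward()                   # backward pass consumes them
    optimizer.step()
    optimizer.zero_grad()
    ```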

    Interestingly, not all layers contribute equally to accuracy improvement; even for crucial layers, the entire layer may not require updates. These unessential layers or parts thereof don’t need to be stored. Moreover, there might be no need to trace back to the initial layer for accuracy enhancement; the process could be halted midway.

    PockEngine capitalizes on these insights to accelerate the fine-tuning process and reduce the computational and memory demands. The system sequentially fine-tunes each layer for a specific task, measuring accuracy improvement after each layer adjustment. This approach allows PockEngine to discern the contribution of each layer, assess trade-offs between accuracy and fine-tuning costs, and automatically determine the percentage of each layer that requires fine-tuning.
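
    As a rough sketch of the general idea of updating only selected layers (a generic partial fine-tuning example, not PockEngine’s actual selection algorithm), one can freeze most of a model’s parameters and train only the pieces judged worth the cost:

    ```python
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                          nn.Linear(256, 256), nn.ReLU(),
                          nn.Linear(256, 10))

    # Suppose an offline profiling pass decided that only the last layer is
    # worth fine-tuning for this task (a hypothetical choice for illustration).
    layers_to_tune = {"4.weight", "4.bias"}

    for name, param in model.named_parameters():
        # Frozen parameters need no gradient storage, which cuts the
        # memory and compute required during training.
        param.requires_grad = name in layers_to_tune

    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(trainable, lr=1e-2)

    x, target = torch.randn(32, 128), torch.randint(0, 10, (32,))
    loss = nn.CrossEntropyLoss()(model(x), target)
    loss.backward()   # gradients are only stored for the unfrozen layer
    optimizer.step()
    ```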

    Han emphasizes, “This method aligns closely with the accuracy achieved through full backpropagation across various tasks and neural networks.”

    Traditionally, generating the backpropagation graph is a computationally heavy task that happens at runtime. PockEngine instead does this work at compile time, before the model is deployed.

    PockEngine deletes chunks of code to remove unnecessary layers or pieces of layers, producing a pared-down graph of the model for use at runtime. It then performs further optimizations on this graph to improve efficiency.

    Because all of this only needs to happen once, there is minimal computational overhead at runtime.

    It is as if you were preparing for a hike in the woods, Han explains: you sit down at home and make all the arrangements, deciding which routes you will take and which you will skip.

    When the researchers applied PockEngine to deep-learning models on a variety of edge devices, including Apple M1 chips, the digital signal processors found in many modern smartphones, and Raspberry Pi computers, it trained models up to 15 times faster than other methods with no loss of accuracy. In addition, fine-tuning with PockEngine required far less memory.

    The team also applied the technique to the large language model Llama-V2. With large language models, Han says, fine-tuning involves many examples, and it is important for these models to learn how to converse with people and to reason through complex problems.

    For example, Llama-V2 models fine-tuned with PockEngine correctly answered the question “What was Michael Jackson’s last album?” whereas models that weren’t fine-tuned failed to do so. With PockEngine, each iteration of the fine-tuning process took less than one second, down from about seven seconds, on an NVIDIA Jetson Orin, an edge GPU platform.

    Looking ahead, the researchers aim to leverage PockEngine for fine-tuning even larger models designed to handle both text and images simultaneously.

    “This research tackles the increasing efficiency challenges posed by the widespread adoption of large AI models like LLMs across various applications in diverse industries. It not only shows promise for edge applications incorporating larger models but also has the potential to reduce the cost associated with maintaining and updating large AI models in the cloud,” says Ehry MacRostie, a senior manager in Amazon’s Artificial General Intelligence division. MacRostie, who was not involved in this study, collaborates with MIT on related AI research through the MIT-Amazon Science Hub.

    Support for this work was provided, in part, by the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, the MIT-Amazon Science Hub, the National Science Foundation (NSF), and the Qualcomm Innovation Fellowship.

  • Programmers are increasingly adopting tools powered by artificial intelligence, and this shift is reshaping their approach to coding.

    Ever since generative AI tools such as OpenAI LP’s ChatGPT and Google LLC’s Bard burst into the spotlight, the world has been captivated by their conversational abilities and impressive utility. One of their earliest and most prominent applications is helping software developers write code.

    Following ChatGPT’s surge in popularity and its remarkable conversational and writing capabilities, numerous code-generating chatbots and developer tools have flooded the market. These include GPT-4, the same model behind the premium ChatGPT Plus service, Microsoft Corp.’s GitHub Copilot coding assistant, Amazon Web Services Inc.’s CodeWhisperer, and others.

    According to a June survey by Stack Overflow involving over 90,000 developers, almost 44% reported using AI tools in their work, with an additional 26% expressing openness to using them soon. The tools were primarily employed for code production (82%), debugging and assistance (48%), and learning about specific codebases (30%).

    To understand how developers are integrating AI tools into their daily work, SiliconANGLE interviewed several developers. Two major coding tool modes have emerged: chatbots like ChatGPT that generate code snippets or check code, and coding assistants like GitHub Copilot that provide suggestions as developers write.

    For instance, Oliver Keane, an associate web developer at Adaptavist Group Ltd., uses ChatGPT as a coding companion to swiftly establish code foundations for projects. Keane provided an example of creating a content management system, where GPT was asked to generate a method for admins to update FAQs. He highlighted that, although he could code it himself, using GPT results in a solution 80% of the time on the first attempt.
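
    As a purely hypothetical illustration of the kind of snippet such a prompt might produce (the framework, endpoint, and data model here are invented for the example and are not taken from Keane’s project):

    ```python
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    # In-memory stand-in for the CMS database, keyed by FAQ id.
    faqs = {1: {"question": "How do I reset my password?", "answer": "..."}}

    @app.route("/admin/faqs/<int:faq_id>", methods=["PUT"])
    def update_faq(faq_id):
        """Let an admin update the question or answer of an existing FAQ."""
        if faq_id not in faqs:
            return jsonify({"error": "FAQ not found"}), 404
        payload = request.get_json(force=True)
        for field in ("question", "answer"):
            if field in payload:
                faqs[faq_id][field] = payload[field]
        return jsonify(faqs[faq_id]), 200
    ```

    In practice a developer would still review and adapt output like this, adding authentication and persistence before it could ship.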

    However, Keane emphasized the need for upfront work, as chatbots like ChatGPT require a conversational history or context to function effectively. Developers must build a rapport by prompting the tool with relevant code until it understands the project, after which it provides better responses.

    Once trained, developers can use these models for code reviews, bug discovery, and revisiting old work, essentially turning them into “pair programmers” for discussions on improving old code. Keane noted that, while the AI tool meets his needs 80% of the time, refining prompts can enhance its performance.

    Chatbots, besides aiding in coding tasks, also prove valuable for learning new frameworks and coding languages. Well-trained language models can serve as effective tutors, rapidly bringing developers up to speed, often surpassing traditional learning resources due to their conversational nature and interactive code discussions.

    AI code completion tools such as GitHub Copilot have been a revelation for many developers, among them Reeve, another developer who spoke with SiliconANGLE. Unlike chatbots, they work like an extra pair of hands at the keyboard: Copilot helps him write better code, he said, even though he isn’t writing most of it himself.

    As he types, Copilot suggests code snippets that are often exactly what he needs. The tool also pushes him to write his own code in a more descriptive and precise way, because if his code is too vague, Copilot will either go haywire or suggest irrelevant code.
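
    A hypothetical illustration of that effect (not an actual Copilot transcript): given a descriptive name and docstring like the first two lines below, a completion tool typically has enough context to propose a sensible body, whereas a vague name such as process(data) gives it little to go on.

    ```python
    def average_order_value_by_month(orders: list[dict]) -> dict[str, float]:
        """Group orders by the 'YYYY-MM' of their 'date' field and average 'total'."""
        # The kind of body a code-completion assistant might suggest from
        # the descriptive name and docstring above.
        totals: dict[str, list[float]] = {}
        for order in orders:
            month = order["date"][:7]          # 'YYYY-MM-DD' -> 'YYYY-MM'
            totals.setdefault(month, []).append(order["total"])
        return {month: sum(vals) / len(vals) for month, vals in totals.items()}
    ```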

    At first, Copilot only suggested short stretches of code, a few lines at a time. But as he grew used to working with it, Copilot began suggesting longer and longer snippets, sometimes 20 lines or more. That has saved him considerable time and effort, freeing him to focus on other parts of his job, such as marketing and business development.

    In a recent survey of Stack Overflow users, 37% of professional coders said that the main benefit of using AI code completion tools is improved productivity. Another 27% said that greater efficiency is the main benefit, and 27% said that speed of learning is the main benefit.

    Reeve can see why productivity is the primary benefit for all types of developers: AI code completion tools save a great deal of time and effort, freeing developers to focus on other things, and for those just learning to code, they can help them learn faster and more effectively.

    Overall, he said, he is a big fan of AI code completion tools: they aren’t perfect, but they’re a valuable tool for any developer.

    While these AI tools can be incredibly helpful for developers, they’re not without their challenges. For instance, large language models may sometimes produce misinformation or “hallucinate,” generating problematic code. When using tools like chatbots, bugs might be caught by the compiler or a seasoned coder during a code review. However, code-suggesting AIs can introduce similar issues, requiring additional time to understand what went wrong after the fact.

    Reeve, for example, shared his experience with anxiety when GitHub Copilot generated a substantial amount of code at once. While it initially seemed like a time-saving boon, a bug emerged hours later due to a minor mistake in the AI-generated code. This highlights a certain level of uncertainty when the tool anticipates too far into the future.

    According to a Stack Overflow survey, only about 42% of developers trust AI models, with 31% expressing doubts and the rest having more serious concerns about the outputs. Some AI models may also be more prone to hallucinations than others.

    On the flip side, these tools enable rapid code production in ways not seen before, potentially sparing developers from tedious tasks. However, there’s a concern that overreliance on AI may hinder newer developers from fully grasping the fundamentals of programming languages.

    Jodie Burchell, a developer advocate at JetBrains, emphasizes that AI coding tools should be viewed as tools and assistants. Developers remain responsible for ensuring that the code aligns with their intentions, even if the AI provides imperfect guidance. Burchell underscores the importance of critical thinking, stating that there’s no shortcut to simply letting models develop code without scrutiny.

    The 2023 Accelerate State of DevOps Report from Google’s cloud division suggests that while AI tools slightly improve individual well-being, their impact on group-level outcomes, such as overall team performance, is neutral or even negative. This mixed evidence is attributed to the early stages of AI tool adoption among enterprises.

    Despite potential challenges, more developers are expressing interest in incorporating AI tools into their workflows, as indicated by the Stack Overflow survey. This trend is particularly notable among developers learning to code, with 54% showing interest compared to 44% of professionals. Veteran coders, on the other hand, tend to be less enthusiastic about adopting these new AI tools.

    In the realm of generative AI developer tools, it’s still early days, but adoption is progressing rapidly. Companies like OpenAI and Meta continuously enhance their models, such as GPT-4, Codex, and Code Llama, for integration into more tools. As these models evolve and become more ingrained in the development process, developers may find themselves spending a significant portion of their coding time collaborating with AI tools. Learning to provide effective prompts, maintaining precision coding for guiding predictive algorithms, and understanding the models’ limitations will likely be crucial for navigating this AI-centric future.

  • OpenAI: A shining example of what humans can achieve when they work together.

    Five days after he was suddenly sacked, Sam Altman is going back to his old job as the boss of OpenAI, creator of the chatbot ChatGPT.

    Out of all the emojis used in my work messages today, the exploding head is winning so far.

    We still don’t know why he was fired in the first place. Whatever it was, OpenAI’s board was disturbed enough to move against him with speed and discretion, telling very few people in advance.

    In a statement, the board implied he had somehow not been honest with it, saying he had not been consistently candid in his communications.

    Given all of that, and the appointment of two interim chief executives in quick succession, an incredible outpouring of support for Mr Altman surfaced from within the company – and it has been very effective. Nearly all of the staff signed a letter saying they would resign if Mr Altman was not brought back. One of the signatories was chief scientist Ilya Sutskever, who had been part of the board that approved the original decision. He later posted on X (formerly Twitter) that he regretted his part in Mr Altman’s removal.

    OpenAI’s week was a rollercoaster of emotions, with the company’s future hanging in the balance. It all started when Mira Murati, the interim CEO, posted a message on X saying “OpenAI is nothing without its people.” This was followed by a flurry of similar messages from other employees, accompanied by a sea of coloured hearts. The whole thing had an eerie resemblance to last year’s big Silicon Valley meltdown when Twitter staff sent a coded message of saluting emojis after Elon Musk’s takeover.

    Sam Altman, OpenAI’s co-founder, didn’t waste any time. By Monday, he had accepted a new job at Microsoft, OpenAI’s biggest investor. This move fueled speculation that Microsoft was effectively taking over OpenAI without staging a formal takeover. In the space of just two days, Microsoft had hired Altman, OpenAI co-founder Greg Brockman, and hinted at taking on other OpenAI employees who wished to join them.

    But in a surprising turn of events, Altman is back at OpenAI. This move is a testament to his power and influence, as he was able to walk back into the company despite having just accepted a job at its biggest investor. It’s also a sign of Microsoft’s commitment to OpenAI, as they were willing to let Altman return to the company without any strings attached.

    Only time will tell what the future holds for OpenAI. But one thing is for sure: the company is in for an exciting ride.

    So what are we to make of this whole soap opera?

    We might be talking about the creators of pioneering technology but there are two very human reasons why we should all care: money and power.

    Let’s start with money. Last month, OpenAI was estimated to be worth around $80bn, and investment money keeps pouring in. But its costs are also enormous: I was told it requires a huge amount of computing power, and costs a couple of pence, each time somebody types a question into the chatbot.

    Last month, a researcher in the Netherlands told me that even the super-rich Google could not afford to answer every query on its search engine with a chatbot, because the cost would simply be too high.

    Therefore, it does not come as a shock that OpenAI turned to money-generating endeavors to ensure its investors remain happy.

    And with money comes power. For all the popcorn-worthy distraction, let’s not forget what OpenAI is building: a technology that will in turn shape the world around it. AI is progressing fast and becoming more powerful every day, which only increases its potential for danger. Mr Altman recently said that the current version of ChatGPT will look outdated compared with what OpenAI plans to launch next, and the current version can already pass the bar exam that newly qualified lawyers must sit.

    Only a handful of people are driving this extraordinary wave of disruption, and Sam Altman is one of them. If all goes well, AI could make us smarter and more efficient; if it goes badly, the consequences could be destructive. But as the last five days have proven, this innovation remains anchored in the power of people – and people can still get things wrong.

    I’ll leave you with my favorite comment about the whole debacle: “People building [artificial general intelligence] unable to predict consequences of their actions three days in advance,” wrote quantum physics professor Andrzej Dragan on X.

  • Nvidia and iPhone manufacturer Foxconn are teaming up to construct factories dedicated to artificial intelligence (AI).

    The world’s most valuable chip company, Nvidia, and iPhone maker Foxconn are joining forces to build so-called “AI factories”.

    Nvidia and Foxconn are teaming up to create a novel kind of data center powered by Nvidia chips, designed to support a broad range of applications. These applications include the training of autonomous vehicles, robotics platforms, and the operation of large language models. This collaboration comes amid the U.S. government’s recent announcement of plans to restrict advanced chip exports to China, posing a challenge for Nvidia.

    According to Nvidia, the new export restrictions will prohibit the sale of two high-end artificial intelligence chips, A800 and H800, specifically developed for the Chinese market. Nvidia’s CEO, Jensen Huang, and Foxconn’s chairman, Young Liu, made this joint announcement at Foxconn’s annual tech showcase in Taipei. Huang referred to the emerging trend of manufacturing intelligence, highlighting that the data centers powering it are essentially AI factories. He emphasized Foxconn’s capability and scale to establish these factories on a global scale.

    Liu expressed Foxconn’s ambition to transform from a manufacturing service company into a platform solution company, envisioning applications beyond AI factories, such as smart cities and smart manufacturing. The strategic use of Nvidia’s advanced chips in AI applications has significantly boosted Nvidia’s market value, surpassing $1 trillion and making it the fifth U.S. company to join the “Trillion dollar club,” alongside Apple, Microsoft, Alphabet, and Amazon.

    Simultaneously, Foxconn, known for producing over half of the world’s Apple products, is diversifying its business. In a June interview with the BBC, Liu highlighted electric vehicles (EVs) as a key growth driver for the company in the coming decades. The partnership between Foxconn and Nvidia, announced in January, focuses on developing autonomous vehicle platforms, with Foxconn handling the manufacturing of electronic control units based on Nvidia’s chips.

  • Sam Altman, the former head of OpenAI, plans to return just days after being removed from the position.

    OpenAI co-founder Sam Altman will return as boss just days after he was fired by the board, the firm has said.

    The tentative agreement involves bringing in new board members, according to the tech company. Sam Altman’s unexpected dismissal on Friday shocked industry observers, prompting staff to threaten mass resignations unless he was reinstated.

    Expressing his anticipation for a return to OpenAI in a post on X (formerly Twitter), Altman stated, “I love OpenAI, and everything I’ve done over the past few days has been in service of keeping this team and its mission together.”

    The board’s decision to remove Altman last week resulted in co-founder Greg Brockman’s resignation, plunging the prominent artificial intelligence (AI) company into chaos. The move was initiated by three non-employee board members—Adam D’Angelo, Tasha McCauley, and Helen Toner—along with a third co-founder and the firm’s chief scientist, Ilya Sutskever.

    However, on Monday, Sutskever apologized in a post on X and signed the staff letter urging the board to reconsider. Microsoft, OpenAI’s largest investor and a user of its technology in various products, then offered Altman a position leading a “new advanced AI research team” at the tech giant.

    On Wednesday, OpenAI announced the tentative agreement for Altman’s return and a partial restructuring of the board that had dismissed him. Former Salesforce co-CEO Bret Taylor and former US treasury secretary Larry Summers are set to join current director Adam D’Angelo, as stated by OpenAI.

    In a post on X, Brockman also confirmed his return to the firm. Emmett Shear, OpenAI’s interim chief executive, expressed deep satisfaction with Altman’s return after “72 very intense hours of work.”

    Microsoft CEO Satya Nadella welcomed the changes to the OpenAI board, calling them a “first essential step on a path to more stable, well-informed, and effective governance.”

    Many staff members shared their enthusiasm online, with one employee, Cory Decareaux, stating on LinkedIn, “We’re back – and we’ll be better than ever.”

    However, some see the incident as damaging to OpenAI’s reputation, with potential implications for investors and recruitment. Nick Patience of S&P Global Market Intelligence told the BBC, “OpenAI can’t be the same company it was up until Friday night. That has implications not only for potential investors but also for recruitment.”

    Despite the challenges, some projects, like Be My Eyes, remain committed to OpenAI for its prioritization of accessibility, even though it may not have significant revenue implications for the company.

    The turmoil began when the board members fired Mr Altman, saying they had “no confidence” in him as a leader.

    His “truthfulness” was in doubt, they suggested, but despite all the twists and turns since Friday, it is still not clear exactly what they were referring to.

    Whatever the reason, there is no doubt that OpenAI’s staff were unhappy: more than 700 of them signed a letter threatening to quit unless he was reinstated.

    The letter noted that Microsoft had told them that, should they choose to leave, there would be jobs for all of them, at the same pay they currently receive.

    Mr Altman’s dramatic return seems to put an end to that threat.

    However, the disruption of the past few days has raised questions about how a decision made by four individuals could throw one of the world’s most prominent AI companies into chaos.

    That stems partly from the company’s unusual structure and mission.

    In 2015, it started off as a non-profit with the aim of developing “safe artificial general intelligence for the benefit of all”. It did not have shareholder welfare or revenue maximization as its goals.

    In 2019, it created a for-profit subsidiary, but its objective stayed the same, and the non-profit’s board retained ultimate control over it.

    Although it is unclear whether tensions over OpenAI’s future direction caused the crisis, it is notable that Mr Altman reportedly made no specific promises in order to secure his return.

    However, several observers have called for more clarity, with Tesla boss Elon Musk urging the directors to “say something”. That has yet to happen. Reacting on X to the news of the reinstatement and the new board, Ms Toner said no more than: “And now, we all get some sleep.”

  • The OpenAI meltdown: Winners and losers in the battle for AI supremacy.

    In the recent OpenAI saga, the prevailing belief points towards Microsoft Corp. emerging as the significant victor. However, upon closer inspection, our perspective diverges from this conventional wisdom.

    Compared with where they stood last Thursday, before the dismissal of OpenAI Chief Executive Sam Altman and the public drama that followed, both Microsoft and OpenAI now find themselves in a less favorable position. Despite holding a substantial lead in market momentum, artificial intelligence adoption, and feature acceleration, the recent events have jeopardized their standing and influence in the field of AI.

    Engaging in conversations with customers and industry insiders has led us to a noteworthy conclusion: the formidable duo has inadvertently placed its considerable lead at risk. While Microsoft CEO Satya Nadella is skillfully making the best of a challenging situation, the competitive landscape has shifted. It has become increasingly apparent that relying solely on one large language model to dominate the AI scene is no longer a given.

    In this Breaking Analysis, we delve into the repercussions of the OpenAI meltdown, focusing on the customer perspective and how it reshapes the competitive dynamics in the ongoing battle for AI supremacy. Additionally, our exceptional data team at Enterprise Technology Research conducted a swift survey among OpenAI Microsoft customers to capture their reactions, and we are eager to share this fresh and insightful data.

    The dynamics of the Microsoft-OpenAI alliance, once a powerhouse, now face a critical juncture. The implications of these recent events extend beyond corporate boardrooms and into the broader AI community, raising questions about the trajectory of AI development and the role of industry giants in shaping this future.

    As we navigate through this unfolding narrative, it becomes clear that the fallout from the OpenAI saga extends beyond the corporate realm. It is a shared concern for customers, industry observers, and enthusiasts alike. The evolving landscape prompts us to reevaluate the assumptions we held about the dominance of a single model and underscores the importance of adaptability and resilience in the rapidly changing world of AI.

    The race for AI supremacy has entered a new phase, one marked by uncertainty and the repositioning of key players. Microsoft and OpenAI, once at the forefront, now find themselves adapting to unforeseen challenges. The narrative is no longer solely about technological prowess but also about how companies navigate adversity and regain lost ground.

    In the coming days, as the dust settles from the OpenAI meltdown, the industry will witness strategic shifts and renewed efforts from these giants to reclaim their positions. How well they respond to the concerns and expectations of their customer base will play a pivotal role in determining their success in the next chapter of the AI saga.

    As we reflect on these developments, it is evident that the story of AI supremacy is far from over. The twists and turns, challenges, and adaptations in response to setbacks are all part of a narrative that continues to captivate and shape the future of technology. The resilience of the industry, the creativity of its leaders, and the evolving needs of customers will collectively define the next era in the ongoing battle for AI supremacy.

    In the spirit of transparency and insight, our data team at Enterprise Technology Research conducted a rapid survey among OpenAI Microsoft customers. The responses provide a valuable snapshot of sentiments within the user community, shedding light on how these events are perceived by those directly impacted. The results of this survey offer a glimpse into the real-world implications of the OpenAI saga and will be crucial in understanding the evolving landscape of AI adoption and customer expectations.

    In conclusion, the OpenAI meltdown has injected a dose of unpredictability into the race for AI supremacy. The once-unchallenged dominance of Microsoft and OpenAI now faces scrutiny, and the narrative is evolving beyond a simple win or lose scenario. It is a testament to the dynamic nature of the tech industry, where resilience and adaptability are as crucial as technological prowess. The story continues, and the next chapter promises to be as intriguing as the developments that have led us to this point.

    As we collectively navigate through these uncertain waters, it is a reminder that the pursuit of AI supremacy is not a linear journey. It is a nuanced, ever-evolving saga that demands careful consideration, adaptability, and a commitment to the shared goal of advancing technology for the benefit of humanity.

    OpenAI, initially established as a nonprofit in 2015, set out with the admirable goal of creating safe artificial general intelligence for the benefit of humanity. Typically, such initiatives receive government funding, but in this case, private industry players opted for a 501(c)(3) structure to best serve the mission. However, despite the initial $1 billion funding from its founders, OpenAI found itself unable to adequately fund its ambitious goals.

    In 2019, OpenAI introduced a structure to facilitate external investments, including from its long-time partner, Microsoft. This collaboration, dating back to 2016, expanded to include an LLC with an operational role owned by the nonprofit. This LLC, in turn, is part of a holding company that enabled OpenAI to attract outside investments.

    Notable investors, such as Khosla Ventures, Tiger Global, Thrive Capital, Sequoia Capital, and Andreessen Horowitz, joined the journey. Microsoft invested around $1 billion during this time, but the exact details of this investment, whether in the holding company or the capped-profit company where Microsoft holds a 49% stake, remain unclear.

    Breaking down this structure, it seems the holding company safeguards certain assets, possibly the intellectual capital related to artificial general intelligence, which Microsoft doesn’t have access to. Microsoft’s 49% ownership of the capped-profit company is tied to substantial contributions, up to $13 billion, and is contingent on OpenAI meeting specific milestones.

    Microsoft holds the right to create derivative works from OpenAI intellectual property, potentially incorporating it into products like Bing, with protections in place in case of OpenAI insolvency.

    However, this intricate structure faces challenges. Microsoft, in pursuit of exclusive access to OpenAI’s GPT 3.5 and GPT 4 technology, seemingly overlooked governance risks. This lapse allowed a board of directors to attempt the ousting of OpenAI’s CEO, resulting in a company in disarray.

    While OpenAI’s success has been remarkable, the recent turmoil raises questions about its future trajectory. The Emerging Technology Survey indicates positive sentiments toward OpenAI, but the complex fallout from recent events has implications for the company, its employees, investors, and the board of directors.

    In the broader landscape, other players like Amazon Web Services, Google, Anthropic, Tesla, IBM, Meta Platforms, Oracle, Salesforce, Dell Technologies, and Hewlett Packard Enterprise are making strides in the AI field, benefiting from OpenAI’s stumble.

    The losers in this scenario include OpenAI, its employees, investors, and the board of directors, who had a $90 billion valuation within their grasp. The customers are caught in the crossfire, and the entire conversation around accelerating AI and trusting the tech industry to ensure AI safety has taken a hit.

    Despite Microsoft’s strategic moves, the situation is far from ideal. The narrative of Microsoft as the AI puppet master has shifted, and the company now faces the challenge of piecing everything back together. The potential return of Sam Altman to OpenAI adds another layer of uncertainty, raising questions about productivity, controls, and the potential impact on Microsoft.

    In conclusion, the OpenAI saga underscores the intricate dynamics of the AI industry. While some companies stand to benefit in the short term, the fallout raises broader questions about the direction of AI development, trust in the industry, and the resilience of major players like Microsoft and OpenAI.

  • Nvidia and Genentech join forces to enhance drug discovery using the power of AI, bringing a human touch to the forefront of scientific innovation.

    Nvidia and Genentech are teaming up for an exciting, long-term collaboration that aims to revolutionize drug discovery through the cutting-edge capabilities of artificial intelligence, including generative AI.

    In simple terms, Nvidia wants to supercharge Genentech’s advanced AI research programs by turning its generative AI models and algorithms into a powerful “next-gen AI platform.” The goal? To speed up the discovery of new therapies and medicines.

    Genentech already boasts its own machine learning algorithms, but it’s turning to Nvidia to level up. It plans to use Nvidia’s DGX Cloud, a training-as-a-service platform that harnesses the AI prowess of Nvidia’s specialized hardware and software, including Nvidia BioNeMo.

    BioNeMo, now available as a training service, is a platform designed to simplify, accelerate, and scale up generative AI applications for computational drug discovery. With the help of Nvidia’s DGX Cloud, researchers can pretrain or fine-tune advanced models.

    Nvidia isn’t just providing technology; they’re sharing their computing expertise. They’ll collaborate closely with Genentech’s computational scientists to optimize and scale up AI models. This collaboration isn’t just a win for Genentech; it might also lead to improvements in Nvidia’s platforms.

    Genentech acknowledges that drug discovery is a lengthy and complex process with a high level of uncertainty. AI, they believe, can be a game-changer, making drug discovery more predictable, cost-effective, and boosting the long-term success rate of research and development programs.

    Aviv Regev, EVP and Head of Research and Early Development at Genentech, is enthusiastic about unlocking scientific discoveries rapidly. He sees the collaboration with Nvidia as a way to optimize drug discovery and development for treatments that can transform lives.

    Genentech’s AI teams are actively working on foundational models in various research areas, aiming to gain insights for target and drug discovery. Their “lab in a loop” initiative, focused on using experimental data to enhance computational models, will benefit from Nvidia’s platforms.

    This collaboration allows Genentech to assess predictions more rapidly and continually improve its models, ultimately leading to more effective therapies.

    Crucially, the collaboration respects data privacy. Nvidia won’t access Genentech’s data without explicit permission for a specific project.

    This partnership reflects the fusion of science and technology, a cornerstone of breakthroughs in biomedical research.

  • How AI is being used in production lines at factories.

    As Doritos, Walkers and Wotsits speed along a conveyor belt at Coventry’s PepsiCo factory – where some of the UK’s most popular crisps are made – the noise of whirring machinery is almost deafening.

    But in this case, it is not only human workers listening out for signs of a breakdown amid all the noise of the factory.

    Sensors attached to different parts of the machinery are listening too, trained to recognize what tired, worn-out machines sound like before they bring production lines to a complete halt.

    After a successful trial in the US, the sensors, built by tech company Augury and powered by artificial intelligence, are being deployed across PepsiCo’s factories.

    PepsiCo is one of a growing number of manufacturers exploring how AI can boost factory efficiency, avoiding waste and delays before products ever reach the retail shelf.

    AI’s ability to crunch huge quantities of data is already helping manufacturers take preemptive action against looming disruption.

    A single minute of factory downtime can translate into tens of thousands in lost earnings, and if further delays mean products miss peak shopping periods such as Christmas or Black Friday, some consumers may miss their window to buy altogether.

    Monitoring devices are therefore increasingly commonplace on factory floors, checking processes in real time, anticipating forthcoming problems, and drawing on recorded history to suggest corrective measures. The sensors used by PepsiCo, for instance, have been trained on vast amounts of sound data, learning what healthy and failing machines sound like.

    “Today we analyse and monitor more than 300 million machine hours and we are able to exploit all that data to devise algorithms capable of locating particular failure patterns”, said Augury’s CEO, Saar Yoskovitz.

    In simple terms, factories are using smart technology to gather information about the health of their machines. This helps workers predict when a machine might break down, allowing them to schedule maintenance beforehand and prevent unexpected errors. Using AI-powered sensors also helps reduce waste in the production process.
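
    As a toy illustration of the underlying idea (not Augury’s actual system), predictive maintenance often comes down to comparing live sensor readings against a baseline learned from healthy machines and flagging anything that drifts too far:

    ```python
    from statistics import mean, stdev

    # Baseline vibration readings recorded while the machine was known to be healthy.
    healthy_readings = [0.42, 0.45, 0.43, 0.44, 0.46, 0.41, 0.44]
    baseline_mean = mean(healthy_readings)
    baseline_std = stdev(healthy_readings)

    def needs_maintenance(latest_readings, threshold=3.0):
        """Flag the machine if its recent average drifts far from the healthy baseline."""
        z_score = abs(mean(latest_readings) - baseline_mean) / baseline_std
        return z_score > threshold

    # A worn bearing typically shows up as steadily rising vibration amplitude.
    print(needs_maintenance([0.44, 0.45, 0.43]))  # False: still near baseline
    print(needs_maintenance([0.71, 0.74, 0.78]))  # True: schedule maintenance
    ```

    Production systems train far more sophisticated models on acoustic and vibration data, but the principle of learning a healthy baseline and flagging deviations is the same.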

    For instance, AI technology like computer vision is being employed to identify defects in products as they move along conveyor belts in factories. This is especially crucial for intricate items like computer chips, where tiny defects might be overlooked by the human eye. AI, through cameras and algorithms, can spot these defects and ensure better product quality.

    According to Alexandra Brintrup, a professor at the University of Cambridge, AI is not only improving efficiency in manufacturing but also opening up new possibilities. These include sharing production capacities between manufacturers, enhancing visibility in supply chains, and even optimizing logistics, like sharing trucks.

    One significant use of AI is in unraveling the complexities of supply chains. By analyzing and predicting suppliers’ information, companies can identify bottlenecks, and consumers can gain insights into the origins and materials used in products. For example, AI can help track the use of ingredients like palm oil, even when disguised under different names on product labels. This transparency is crucial as society becomes more environmentally and socially conscious.

    In summary, AI is revolutionizing manufacturing by making processes more efficient, improving product quality, and increasing transparency in the supply chain.

    But what will the growing use of AI tools on factory floors, and elsewhere in the wider supply chain, mean for workers?

    Some companies are examining how AI could monitor the safety of production-line workers who operate around heavy machinery, using machine-learning and computer-vision techniques to analyze footage from factory cameras and spot likely accidents and risks.

    Meanwhile, companies such as PepsiCo are deploying AI-powered wearables, including exoskeletons, across warehouses in Britain in a bid to prevent injuries among employees who repeatedly lift heavy loads, says David Schwartz, global vice president of PepsiCo Labs.

    The technology, he says, is improving how people work, delivering more efficiency for the company’s people and its customers and preparing the business to keep serving them day to day.

  • Sam Altman let go from OpenAI as the board discovered he wasn’t always straightforward in communication.

    Sam Altman is out as chief executive of OpenAI LP after the company found that he “was not consistently candid” with the board.

    OpenAI gave no details about what, exactly, Altman had failed to disclose, and the abrupt leadership change came with little explanation. Mira Murati, OpenAI’s chief technology officer, will serve as interim CEO while a permanent replacement is sought, and the company said the search for candidates is already under way. Greg Brockman also resigned as OpenAI’s president on Friday.

    In its statement, the company said Altman’s departure followed a review by the board, which concluded that he had not been consistently candid in his communications, hindering the board’s ability to carry out its responsibilities, and that it no longer had confidence in his ability to continue leading OpenAI.

    To some observers, OpenAI’s handling of the matter was unusual. Boards of high-profile tech companies rarely force out their CEOs, and when they do, the circumstances behind such removals are seldom made public.

    It was also sudden: The New York Times noted that Altman had appeared at an event in Oakland, California, on Thursday evening and gave no hint of his looming exit. Microsoft, OpenAI’s biggest investor, was reportedly given very little notice before the decision was announced.

    The board’s specific concerns about Altman were not apparent on Friday afternoon, but reports later that evening suggested that the board, and chief scientist Ilya Sutskever in particular, believed Altman was moving too fast and paying too little attention to the safety of AI models.

    Certainly, the blunt wording of the announcement suggests problems beyond difficulties on the commercial side. Tech journalist and podcast host Kara Swisher said sources told her that the profit-driven side of OpenAI and those who opposed its direction were at odds, that the company’s recent developer day had been a sore point, and that there could be more significant departures to come.

    One possible clue came in the final sentence of the announcement: while OpenAI has grown dramatically, it remains the board’s fundamental responsibility to advance OpenAI’s mission and preserve the principles of its charter, a short document in which OpenAI pledges, among other things, to ensure the safety of any AGI system the company may one day build.

    In a subsequent post on X, Swisher suggested this might be at the crux of the clash, saying she had heard that chief scientist Ilya Sutskever drove the move and got the board to back him against Sam Altman and Greg Brockman, whom he saw as pushing too hard and too fast in steering OpenAI’s direction and growth. Her bet: by Monday, Altman would have a new venture up and running.

    Another report, in The Information, claimed that Altman had been seen as too quick to make decisions and too willing to overlook safety concerns. Others, however, suspect there is more behind the move, given the unusually blunt tone of the board’s statement.

    Altman, for his part, wrote on X that he had loved his time at OpenAI, that it had been transformative for him personally and perhaps for the world a little bit, and that most of all he had loved working with such talented people. He added that he would have more to say later about what comes next.

    Later that evening, Brockman took to X, pouring out his feelings about the board’s decision. He expressed shock and sadness, assuring everyone that he and Sam were taken aback by the sudden move and hinted at something greater on the horizon.

    Adding more insight, Swisher shared thoughts on X during the late hours, shedding light on the dissenting board members’ perspective. They believed Altman to be manipulative and headstrong, wanting to forge his own path. Swisher mused that this might be typical behavior for a Silicon Valley CEO, but the uniqueness of OpenAI as a company raised questions. There was a call for transparency, urging the board to provide specifics about their concerns and evidence of attempts to communicate with Altman before making such a drastic decision. The situation, if mishandled, appeared clumsy.

    The roster of OpenAI’s board includes notable figures such as Sutskever, Quora Inc. CEO Adam D’Angelo, tech entrepreneur Tasha McCauley, and Helen Toner from the Georgetown Center for Security and Emerging Technology. Until today, Sam Altman and Brockman held director positions. Although Brockman announced his departure from the board, the company clarified that he would continue working at OpenAI in his existing role.

    OpenAI, founded in 2015 with a substantial $1 billion backing, aimed to pioneer advanced AI models. Altman, a founding team member, took the reins as CEO in 2019, having previously led the Y Combinator startup accelerator.

    Under Altman’s leadership, OpenAI achieved significant milestones, launching the widely acclaimed ChatGPT chatbot. This success attracted attention, leading to a $10 billion investment from Microsoft in January at a $29 billion valuation. By October, OpenAI boasted an annualized revenue run rate of $1.3 billion and aspired to be valued as high as $86 billion.

    Microsoft’s considerable investment in OpenAI makes the leadership change a pivotal moment for the tech giant. Speculation arose about Microsoft taking proactive measures to manage any potential fallout, with a 1.7% drop in its stock possibly reflecting the impact of the OpenAI news. The situation remains dynamic, and the industry is keenly watching how it unfolds.

    Sarbjeet Johal, a tech analyst, pointed out that there seemed to be tension between OpenAI and Microsoft during the AI event last week. It was also surprising that Altman didn’t show up at Microsoft’s conference this week, especially considering the recent announcement of the ChatGPT-driven Copilot across various Microsoft products. Johal mentioned that OpenAI faced competition from open-source generative AI models, including Meta Platforms Inc.’s Llama.

    OpenAI’s new interim CEO, Murati, has been with the company since 2018. As CTO, she played a key part in developing ChatGPT and other products such as the DALL-E series for image generation. Before becoming interim CEO, Murati already led three crucial units at OpenAI: research, product, and AI safety.

    Brockman, in a statement on X, expressed shock and sadness about the board’s decision. He and Sam thanked everyone they worked with, acknowledging the support received. They were still trying to understand the situation. According to Brockman, Sam learned about his firing in a Google Meet with the board, excluding Greg. Around the same time, Greg received a message from Ilya, leading to his removal from the board while retaining his role. OpenAI published a blog post simultaneously. The management team, except for Mira who found out the night before, learned of the situation shortly after. Brockman reassured everyone of their well-being and hinted at exciting things to come.

    Altman, sharing on X, described the day as a strange experience, akin to reading his own eulogy while still alive. He expressed gratitude for the outpouring of love and encouraged people to appreciate their friends.

    In response to the unfolding events, Matei Zaharia, CTO of Databricks Inc., mentioned on X that they were hiring for AI research positions, emphasizing their mission to democratize AI.