Blog

  • A report is cautioning that AI has the potential to exacerbate cyber threats.

    According to a report from the UK government, artificial intelligence might heighten the risk of cyber-attacks and undermine trust in online content by 2025.

    The technology could potentially be used in planning biological or chemical attacks by terrorists, according to the report. However, some experts have raised doubts about whether the technology will develop as projected.

    Prime Minister Rishi Sunak is expected to discuss the opportunities and challenges presented by this technology on Thursday.

    The government’s report specifically examines generative AI, the kind of system that currently powers popular chatbots and image generation software. It draws in part from declassified information provided by intelligence agencies.

    The report issues a warning that by 2025, generative AI could potentially be employed to “accumulate information related to physical attacks carried out by non-state violent actors, including those involving chemical, biological, and radiological weapons.”

    The report points out that while companies are striving to prevent this, the effectiveness of these protective measures varies.

    Obtaining the knowledge, raw materials, and equipment required for such attacks poses obstacles, but these barriers are diminishing, potentially due to the influence of AI, as per the report’s findings.

    Furthermore, it predicts that by 2025, AI is likely to contribute to the creation of cyber-attacks that are not only faster but also more efficient and on a larger scale.

    Joseph Jarnecki, a researcher specializing in cyber threats at the Royal United Services Institute, noted that AI could aid hackers, particularly in overcoming their challenges in replicating official language. He explained, “There’s a particular tone used in bureaucratic language that cybercriminals have found challenging to emulate.”

    The report precedes a speech by Mr. Sunak scheduled for Thursday, where he is expected to outline the UK government’s plans to ensure the safety of AI and position the UK as a global leader in AI safety.

    Mr. Sunak is expected to express that AI will usher in new knowledge, economic growth opportunities, advancements in human capabilities, and the potential to address problems once deemed insurmountable. However, he will also acknowledge the new risks and fears associated with AI.

    He will commit to addressing these concerns directly, ensuring that both current and future generations have the opportunity to benefit from AI in creating a better future.

    This speech paves the way for a government summit set for the next week, focusing on the potential threat posed by highly advanced AI systems, often referred to as “Frontier AI.” These systems are believed to have the capacity to perform a wide range of tasks, surpassing the capabilities of today’s most advanced models.

    Debates about whether such systems could pose a threat to humanity are ongoing. According to a recently published report by the Government Office for Science, many experts consider this a risk with low likelihood and few plausible pathways to realization. The report indicates that, to pose a risk to human existence, an AI would need control over crucial systems like weapons or financial systems, the ability to enhance its own programming, the capacity to avoid human oversight, and a sense of autonomy. However, it underscores the absence of a consensus on timelines and the plausibility of specific future capabilities.

    Major AI companies have generally acknowledged the necessity of regulation, and their representatives are expected to participate in the summit. Nevertheless, Rachel Coldicutt, an expert on the societal impact of technology, questioned the summit’s focus. She noted that it places significant emphasis on future risks and suggested that technology companies, concerned about immediate regulatory implications, tend to concentrate on long-term risks. She also pointed out that the government reports are tempering some of the enthusiasm regarding these futuristic threats and highlight a gap between the political stance and the technical reality.

  • “Rishi Sunak acknowledges the threats and risks posed by AI while also highlighting its potential.”


    “Prime Minister Rishi Sunak has cautioned that artificial intelligence could potentially facilitate the development of chemical and biological weapons.”

    “In a worst-case scenario, Mr. Sunak warned that society might lose all control over AI, rendering it impossible to deactivate. While there’s ongoing debate about the extent of potential harm, he stressed the importance of not ignoring AI risks.

    In a speech aimed at positioning the UK as a global AI leader, the Prime Minister highlighted that AI is already generating employment opportunities. He acknowledged that AI’s advancement would stimulate economic growth and enhance productivity, even though it would impact the job market.

    During the Thursday morning address, the Prime Minister outlined both the capabilities and potential risks associated with AI, including concerns like cyber attacks, fraud, and child sexual abuse, as detailed in a government report.

    Mr. Sunak emphasized that one of the report’s concerns was the potential use of AI by terrorist groups to amplify fear and disruption on a larger scale. He stressed the importance of making safeguarding against the threat of human extinction from AI a global priority. However, he also added, ‘There is no need for people to lose sleep over this at the moment, and I don’t want to be alarmist.’”

    He expressed his “optimism” regarding AI’s potential to enhance people’s lives.

    A more immediate concern for many individuals is the impact AI is currently having on employment.

    Mr. Sunak highlighted how AI tools are effectively handling administrative tasks, such as contract preparation and decision-making, which were traditionally human roles.

    He emphasized that he believed education was the key to equipping people for the evolving job market, noting that technology has always transformed how people earn a living.

    For example, automation has already altered the nature of work in factories and warehouses, but it hasn’t completely eliminated human involvement.

    The Prime Minister argued against the simplistic notion that artificial intelligence would “take people’s jobs,” instead encouraging the public to view the technology as a “co-pilot” in their daily work activities.

    Numerous reports, including declassified information from the UK intelligence community, have outlined a series of concerns regarding the potential threats posed by AI in the coming two years.

    According to the “Safety and Security Risks of Generative Artificial Intelligence to 2025” report by the government, AI could be used to:

    1. Strengthen terrorist capabilities in areas such as propaganda, radicalization, recruitment, funding, weapons development, and attack planning.
    2. Increase occurrences of fraud, impersonation, ransomware, currency theft, data harvesting, and voice cloning.
    3. Amplify the distribution of child sexual abuse imagery.
    4. Plan and execute cyberattacks.
    5. Undermine trust in information and employ ‘deepfakes’ to influence public discourse.
    6. Gather knowledge on physical attacks carried out by non-state violent actors, including the use of chemical, biological, and radiological weapons.

    Opinions among experts are divided when it comes to the AI threat, as previous concerns about emerging technologies have not always materialized to the extent predicted.

    Rashik Parmar, CEO of The Chartered Institute for IT, emphasized that AI will not evolve like the fictional character “The Terminator.” Instead, with the right precautions, it can become a trusted companion in our lives from early education to retirement.

    In his speech, Mr. Sunak stressed that the UK would not hastily impose regulations on AI due to the complexity of regulating something not fully understood. He advocated a balanced approach that encourages innovation while maintaining proportionality.

    Mr. Sunak aims to position the UK as a global leader in AI safety, despite not being able to compete with major players like the US and China in terms of resources and tech giants.

    Most of the Western AI developers are currently cooperating, but they remain secretive about the data their tools use and their underlying processes.

    The UK faces the challenge of persuading these firms to adopt a more transparent approach rather than, as the Prime Minister put it, “marking their own homework.”

    Carissa Veliz, an associate professor in philosophy at the University of Oxford, pointed out that the UK has historically been reluctant to regulate AI, which makes Mr Sunak's statement about the UK's leadership in AI safety interesting. She added that regulation often drives impressive and crucial innovations.

    The Labour Party criticized the government for not providing concrete proposals on how it plans to regulate the most powerful AI models and called for action to ensure public protection.

    The UK is hosting a two-day AI safety summit at Bletchley Park in Buckinghamshire, with China expected to attend. This decision has sparked criticism due to tense relations between the two countries. Former Prime Minister Liz Truss has requested the rescission of China’s invitation, emphasizing the importance of working with allies.

    However, Mr. Sunak defended the decision, asserting that a serious strategy for AI requires engagement with all leading AI powers worldwide.

    The summit will bring together world leaders, tech companies, scientists, and academics to discuss emerging technology. Professor Gina Neff, Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, criticized the summit’s focus, arguing that the most pressing concerns, such as digital skills and the responsible use of powerful AI tools, are not adequately addressed.

    She emphasized that this poses risks for individuals, communities, and the planet.

    British Prime Minister Rishi Sunak delivers a speech on AI at the Royal Society, Carlton House Terrace, in London on 26 October 2023.
  • “Can artificial intelligence be effectively regulated?”

    “Is it possible to effectively govern artificial intelligence? According to Jimmy Wales, the founder of Wikipedia, believing in such control is comparable to engaging in ‘magical thinking’.”

    “Very often there is limited comprehension among politicians and their aides about the workings of the internet, and about what can be accomplished,” says Mr Wales, who has spent a good deal of time giving politicians around the world his insights into technology and the importance of free speech.

    “It’s like saying the United Nations should regulate [the image-editing application] Photoshop or something,” he says. It would make no sense, he contends.

    How, and whether, AI should be regulated became a fiercely debated question this summer, when the United Nations Secretary General called for the first ever UN Security Council meeting to deliberate on the possible threats posed by artificial intelligence.

    Speaking about everything from AI-powered cyber-attacks and malfunctioning AI to the technology's role in spreading misinformation and its interaction with nuclear weapons, Mr Guterres warned that failing to act on these risks would mean failing in our duty to present and future generations.

    Mr Guterres is now setting up a UN panel to examine what global regulation is needed. Called the High-Level Advisory Body for Artificial Intelligence, it is made up of “current and past government experts, industry representatives, civil society actors, and academics”.

    The body is due to publish its preliminary findings by the end of this year. Meanwhile, last week US tech figures including Elon Musk and Meta boss Mark Zuckerberg met US legislators in Washington to discuss the development of laws governing AI systems.

    Nevertheless, a number of AI insiders doubt that global regulation is viable. One of them is Pierre Haren, who has been studying AI for 45 years.

    He spent more than seven years at computing giant IBM, where he led a team that installed the firm's Watson supercomputer technology for clients. Launched in 2010, Watson was an early form of AI that users could put questions to.

    Despite that background, he says he was genuinely amazed by how quickly ChatGPT and other generative AI applications have come to market over the past year.

    In simple terms, generative AI is AI that can rapidly generate new content, including text, pictures, music and video. It can take an idea from one context and apply it to a new scenario.

    That ability, Mr Haren says, is human-like. “This is not a parrot that repeats whatever you put into it,” he adds. “It’s making high-level analogies.”

    So what kind of rules could stop this sort of AI running amok? None, according to Mr Haren, because some countries will never sign up to such agreements.

    “There are non-cooperating countries in this world, such as North Korea and Iran,” he says. They cannot be expected to abide by any regulation of AI.

    “Can you imagine Iran, which is looking for ways to destroy Israel, complying with AI regulations?”

    The “AI for Good” programme was established by Reinhard Scholl, a physicist. Its focus is on identifying and deploying practical AI tools that can help achieve the UN's Sustainable Development Goals, which include ending poverty, eliminating hunger and providing clean drinking water for all.

    AI for Good began as an annual conference in 2017 and is now a year-round programme of webinars covering every aspect of AI.

    AI for Good has struck a chord, attracting more than 20,000 followers, so there is clearly an appetite for “positive” AI. That does not mean, however, that Mr Scholl thinks AI can be left unregulated.

    He says it would be naive not to regulate AI, just as cars and toys are regulated.

    His fear is that AI could act as a shortcut for ill-intentioned people seeking to acquire dangerous capabilities.

    “A physicist can show you how to construct a nuclear bomb on paper, but building one in real life is very difficult,” he says. “But someone using AI to design a biological weapon would not need that kind of expert knowledge.”

    “And so somebody could use AI to do something hugely damaging.”

    So what form should a future UN-backed regulator for AI take? One suggestion is that it could be modelled on the International Civil Aviation Organization (ICAO), the international body that oversees the safety of global air travel and has 193 member nations.

    One AI expert who supports establishing an ICAO-style body is Robert Opp, the chief digital officer of the UN Development Programme (UNDP).

    The UNDP works to help countries achieve economic prosperity and stamp out poverty, and Mr Opp looks for ways to use technology to amplify the organisation's impact.

    That includes using AI to rapidly analyse satellite photographs of farmland in poor regions. Mr Opp says his aim is not to restrict the ability of generative AI to help poor people grow their businesses.

    Yet he also accepts that AI can be a threat, and he believes the question of how it should be governed is urgent.

    Urgent or not, Wikipedia's Mr Wales thinks a UN-led approach is utterly misguided.

    He believes international authorities are making a serious mistake by exaggerating the role that tech giants such as Google play in the wave of AI products, because however good those firms' intentions, nobody can control individual developers' willingness to use AI technology.

    Beyond the boundaries of the big tech companies, he adds, huge numbers of programmers are working with baseline AI code that is openly available on the internet. With tens of thousands of individual developers building on these innovations, he argues, they are never going to be regulated.

  • “Will Rishi Sunak’s significant summit rescue us from the AI dystopia?”

    “If you’re uncertain about your stance on artificial intelligence, you’re in good company.”

    Will it save humanity or destroy it? The stakes are that high, and the jury is still out, even among the world's leading experts.

    That is why some AI developers have called for the technology's development to be slowed down or halted altogether, arguing that it is growing too fast. Others say such a pause would curtail the technology's ability to do remarkable things, such as devising formulations for new drugs and contributing to solutions for climate change.

    Next week, around 100 world leaders, tech executives, scientists and AI specialists will meet at Bletchley in the UK to discuss how to reap the greatest benefits from this potent technology while minimising the risks that come with it.

    The UK's forthcoming AI safety summit focuses on extreme risks such as superintelligence and an intelligence explosion. These relate to so-called “frontier AI”, which does not yet exist, although the technology is evolving at remarkable speed.

    Critics, however, argue that the summit should concentrate on AI's more immediate problems, such as its energy consumption and its effect on jobs.

    A report published by the UK government last week highlights some disturbing possible dangers, including bioterrorism, cyber-attacks, autonomous AI systems and the increased sharing of child sexual abuse deepfakes.

    Against that frightening backdrop, Prime Minister Rishi Sunak has urged people not “to lose any sleep”. His objective is an ambitious one: he hopes to make the UK the world leader in AI safety.

    When the summit was announced and billed as the world's first, however, many people raised eyebrows. It seemed unlikely that the world's most powerful figures would travel to a distant, leafy corner of Britain in the cold, close to the US Thanksgiving holiday.

    There is no official list of invited guests, but it is by now clear that the US tech giants will be well represented, by high-level executives though not necessarily at chief executive level.

    The commercial sector is full of enthusiasm for the summit. Stability AI chief executive Emad Mostaque said the summit was a once-in-a-generation chance for the UK to cement its status as an AI superpower.

    “We urge the government and other policymakers to support AI safety across the whole ecosystem, from corporate labs to independent researchers, to keep Britain safe and competitive,” he said.

    The companies building AI systems should clearly play a crucial role in the discussion. But there is a risk the summit simply replicates conversations they are already having among themselves; diversity of thought is critical here.

    The contingent of world leaders is something of a hodge-podge. US Vice-President Kamala Harris will be there, but Canadian Prime Minister Justin Trudeau will not. European Commission President Ursula von der Leyen will attend, but German Chancellor Olaf Scholz will not. Controversially, China, whose relations with western nations are strained, has been invited to take part; it remains a formidable tech nation.

    Also notable is that United Nations Secretary-General Antonio Guterres is making the trip, a sign of the appetite for some form of global oversight of AI technologies.

    Some experts, however, worry that the summit is tackling issues in the wrong order. They contend that extreme doomsday scenarios are relatively unlikely, while more immediate challenges much closer to home will prove far more unsettling for many people.

    “Our concerns include what will happen to our jobs, our news, our means of communication, our people, our communities and ultimately our planet,” says Prof Gina Neff, who heads an AI centre at the University of Cambridge.

    Within about four years, the computing infrastructure needed to power the AI industry could be consuming as much energy as the Netherlands. The summit will not discuss this.

    We also know that AI has begun disrupting jobs. A friend of mine works at a local marketing agency: it used to employ five copywriters, and now has just one, who edits what ChatGPT produces. The Department for Work and Pensions is using AI to help process benefit claims.

    But where do these tools get their training data? The commercial companies that own them keep those details secret. Yet AI is arguably only as good as the data it learns from, and bias and discrimination have crept into many of the tools in use today, including off-the-shelf ones.

    These problems will come up at the summit, but they will not be at its heart.

  • The chip maker that became an AI superpower…

    Shares in computer chip designer Nvidia have soared over the past week, taking the company’s valuation above the one trillion dollar mark.

    Nvidia was known for its graphics processing computer chips.

    Reaching a trillion-dollar valuation puts Nvidia in an exclusive club alongside Apple, Amazon, Alphabet and Microsoft.

    The surge followed the company's latest quarterly earnings announcement, released late on Wednesday, in which it said it was raising chip production in response to “surging demand”.

    Today, Nvidia dominates the market for AI chips.

    Interest in that market has turned into a frenzy since ChatGPT was first made available to the public last November, spreading far beyond the technology industry.

    From helping with speeches to software coding and cookery, ChatGPT has become one of the most widely used applications of AI.

    None of it, however, would be possible without powerful computer hardware, and in particular chips from California-based Nvidia.

    Originally known for making graphics processing units (GPUs) for computer games, Nvidia's hardware now underpins most AI applications.

    “It is the leading technology player enabling this new thing called artificial intelligence,” says Alan Priestley, a semiconductor industry analyst at Gartner.

    “Nvidia is to AI as Intel was to PCs,” says Dan Hutcheson, an analyst at TechInsights.

    ChatGPT was trained using 10,000 of Nvidia's GPUs clustered together in a supercomputer belonging to Microsoft.

    “It is one of many supercomputers, some public and some not, that have been built with Nvidia GPUs for a range of scientific as well as AI use cases,” says Ian Buck, general manager and vice-president of accelerated computing at Nvidia.

    Recent analysis by CB Insights suggests Nvidia holds around 95% of the market for GPUs used in machine learning.

    Its AI chips cost roughly $10,000 (£8,000) each, though its newest and most powerful versions sell for far more.

    So how did Nvidia become such a central player in the AI revolution?

    In short: a bold bet on its own technology, plus a measure of good timing.

    Nvidia was co-founded in 1993 by Jensen Huang, who is still its chief executive. At first the company focused on improving graphics for gaming and other applications.

    In 1999 it developed GPUs as a way to enhance how computers display images.

    GPUs excel at processing many small tasks at the same time (such as handling millions of pixels on a screen), a technique known as parallel processing.

    In 2006, researchers at Stanford University discovered that, beyond their use in gaming, GPUs could also dramatically speed up mathematical operations in a way that ordinary processing chips could not.

    It was at that moment that Mr Huang made a decision critical to where AI has got to today.

    He committed his company's resources to creating a programming tool that unlocked GPUs not just for graphics, but for any application that could take advantage of parallel processing.
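
    The article does not go into the programming details, but the idea behind parallel processing is easy to sketch. The hypothetical Python snippet below is an illustration only: the same per-pixel brightness adjustment is written first as a sequential loop and then as a single vectorized NumPy operation, which stands in conceptually for the way a GPU hands each pixel to one of thousands of parallel threads. (Nvidia's real tooling for general-purpose GPU programming, such as CUDA, is not shown here.)

    ```python
    # Illustration only: NumPy vectorization as a stand-in for the kind of
    # data-parallel, per-pixel work a GPU spreads across thousands of cores.
    import numpy as np

    # A synthetic 4-megapixel greyscale "image": one brightness value per pixel.
    image = np.random.rand(2000, 2000).astype(np.float32)

    def brighten_loop(img, factor=1.2):
        # The same tiny job repeated once per pixel, strictly in sequence.
        out = np.empty_like(img)
        for y in range(img.shape[0]):
            for x in range(img.shape[1]):
                out[y, x] = min(img[y, x] * factor, 1.0)
        return out

    def brighten_vectorized(img, factor=1.2):
        # One expression applied to every pixel "at once": conceptually what a
        # GPU does when it assigns each pixel to its own thread.
        return np.minimum(img * factor, 1.0)

    # Both give the same result; the vectorized (parallel-style) version is far faster.
    small = image[:200, :200]
    assert np.allclose(brighten_loop(small), brighten_vectorized(small))
    ```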

    Nvidia built this capability into its consumer chips. Gamers did not need it and may not even have known it was there, but for researchers it offered a new form of high-performance computing on consumer hardware.

    It was those capabilities that triggered the initial breakthroughs of modern AI.

    AlexNet, an AI that could classify images, was unveiled in 2012. Only two of Nvidia's programmable GPUs were used to train it.

    The training process took just a few days, rather than the months it might have taken on hundreds of ordinary processing chips.

    Word quickly spread among computer scientists, who began buying the cards to run a brand new class of workload.

    According to Mr Buck, “We were caught by AI”.

    Nvidia extended its advantage by pouring investment into GPUs specialised for AI, and into related software that makes the technology easier to use.

    A decade and many billions of dollars later, ChatGPT emerged: an AI that sounds remarkably human in its responses.
    Metaphysic, an American AI start-up, produces photorealistic videos of celebrities and others. Its deepfakes of Tom Cruise, released in 2021, had everyone talking.

    It trains and runs its models on hundreds of Nvidia GPUs, some bought directly from Nvidia and others accessed through cloud computing services.

    “Nvidia is the only option out there that we could use,” says Tom Graham, its co-founder and chief executive. “It is way out in front.”

    Nonetheless, while Nvidia looks safe in the short term, the longer term is harder to predict. “Nvidia has a bullseye on it that every competitor is aiming at,” says Kevin Krewell, an industry analyst at TIRIAS Research.

    Some competition comes from other major semiconductor companies. AMD, better known as a maker of CPUs, now produces dedicated GPUs for AI, and Intel has recently entered the same market.

    Google's TPUs are used both for search results and for certain machine-learning tasks, while Amazon has its own chip for training AI models.

    In addition, Microsoft is rumoured to be developing an AI chip, and Meta has an AI chip project of its own.

  • Nvidia says…US orders immediate halt to some AI chip exports to China.

    Tech giant Nvidia says the US has told it to stop shipping some of its advanced artificial intelligence chips to China immediately.

    The restrictions had originally been due to come into force 30 days after 17 October.

    That was when President Joe Biden's administration announced measures to stop countries including China, Iran and Russia from buying advanced AI chips made by Nvidia and other manufacturers.

    Nvidia did not explain why the timeline had been brought forward.

    In a filing to the US Securities and Exchange Commission, Nvidia said the government had made the licensing requirements effective immediately, but added that, given the strength of global demand for its products, it did not expect the change to have a material short-term impact.

    The restrictions halt shipments of the latest generation of Nvidia's AI chips, which were designed for the Chinese market specifically to comply with earlier export rules.

    The accelerated implementation of the curbs is the latest manoeuvre in the technology battle between the US and China.

    Chinese authorities have yet to comment on Nvidia's announcement, but Beijing hit back last week when the Biden administration announced the new restrictions on exports of more advanced chips.

    China's foreign ministry said the measures “violate the principles of a market economy and fair competition”.

    The move was seen as a way of closing gaps and loopholes that emerged after the first wave of chip controls introduced in October last year.

    The US said the measures were intended to stop China obtaining highly sophisticated technologies that it could use to develop its military, particularly in areas such as AI.

    Soaring demand for its AI chips has pushed Nvidia's share price up more than three-fold.

    In May, the firm joined the select group of companies valued at more than a trillion dollars, alongside tech titans Apple, Amazon, Alphabet and Microsoft.

    The California-based company now dominates the market for chips used in AI systems.

    Advanced Micro Devices (AMD), another major supplier of AI chips to China, has not issued a statement on the accelerated restrictions and did not immediately reply to a BBC enquiry.

    The US Department of Commerce declined to comment when contacted by the BBC.

  • National Star College students to gain independence using AI technology…

    A new £6.2m residential building equipped with the latest technology is due to open at a college for students with disabilities.

    An interactive fridge allows students to see inside without opening the door by using the command “tell me what is in my fridge”

    The accommodation at National Star College will operate as a smart house, with voice-controlled facilities throughout.

    This will give students the opportunity to adapt the AI to their individual needs.

    The building will be opened later by disability campaigners Jack Thorne and Rachel Mason.

    The college, which operates at Ullenwood near Cheltenham, offers education and rehabilitation for young people with a wide range of disabilities. It is hoped the technology will give students greater independence and a better chance of adapting to life after college.

    The single-storey, 13-room “Building a Brighter Future” facility is fitted with ceiling track hoists for lifting residents and artificial intelligence features such as talking fridges controlled by voice commands.

    “This is an opportunity for us to introduce these devices to the students in a friendly environment at National Star College,” explained Maizie Morgan, the Assistive Technology Technician.

    “The objective is for existing as well as future students to use this technology, see what is available out there, and eventually incorporate it into their own bedrooms before leaving college,” she said.

    Principal Simon Welch said the technology had been adapted to give students the assistance they need on an individual basis.

    “We look at the young people with disabilities and what matters most to them,” he said.

    “This technology is not that innovative, but rather it is how we work with people,” he added.

    Student Jaspar Tomlinson had the chance to try out the technology before its launch.

    He is non-verbal, but can issue commands to the smart devices through an eye-controlled electronic communicator.

    A single action word can then control the devices and appliances in the rooms.
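
    The article does not describe how those commands reach the devices, but the general pattern is simple to illustrate. The hypothetical Python sketch below shows the kind of mapping involved: a single action word coming from a communicator or voice assistant is looked up in a table and routed to a smart-home action. Every device name and function here is invented for illustration and is not taken from National Star's actual system.

    ```python
    # Hypothetical sketch: routing a single action word from a communicator or
    # voice assistant to a smart-home action. All names below are invented.

    def show_fridge_contents():
        print("Showing the fridge camera feed on the room display")

    def open_curtains():
        print("Opening the curtains")

    def turn_on_lights():
        print("Turning the lights on")

    # One action word per command keeps the vocabulary small enough to select
    # quickly with an eye-controlled communicator.
    ACTIONS = {
        "fridge": show_fridge_contents,
        "curtains": open_curtains,
        "lights": turn_on_lights,
    }

    def handle_command(word: str) -> None:
        action = ACTIONS.get(word.strip().lower())
        if action is not None:
            action()
        else:
            print(f"Unknown command: {word!r}")

    handle_command("fridge")  # e.g. triggered by "tell me what is in my fridge"
    ```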

    “It is good because it gives me confidence until I leave college,” he says.

    Peter Horne, National Star's deputy chief executive, said the new accommodation would improve quality of life for young people with multiple physical and learning disabilities, offering a stimulating environment in which to live, study and enjoy their lives.

  • Paedophiles using AI to turn celebrities into children…

    Paedophiles are using artificial intelligence (AI) to create images of singers and film stars as children.

    By Joe Tidy, cyber correspondent

    The Internet Watch Foundation (IWF) said that images of one female celebrity, reimagined as a child, are being circulated among paedophiles.

    On one dark web forum, images of child actors are also being manipulated, the charity said.

    Bespoke image generators are also being used to create hundreds of new images of real victims of child sexual abuse.

    The details come from the IWF's latest report on the growing danger, published to draw attention to the threat posed by AI systems that turn simple text commands into images.

    Ever since these powerful image-generation systems became publicly available, researchers have warned that they could be misused to create explicit imagery.

    In May, Home Secretary Suella Braverman and US Homeland Security Secretary Alejandro Mayorkas released a joint statement pledging to combat the despicable AI-generated images of child sexual abuse being created by paedophiles.

    In its report, the IWF describes how researchers spent a month logging AI-generated imagery on a single darknet site and identified more than 2,800 synthetic images that breach UK law.

    Analysts say a recent trend involves predators taking a single image of a widely known child abuse victim and using it to generate many more images of the same child in different abusive settings.

    One folder contained 501 images of a real victim, a girl who was roughly nine or ten years old when she was abused. Predators also shared a finely tuned AI model file in the same folder so that others could generate further images of her.

    Some of the imagery is so realistic, the IWF said, that even adults could mistake it for genuine photographs of child sexual abuse; it includes images of celebrities made to look like children.

    Analysts mostly saw images of younger-looking female singers and actresses that had been altered with imaging software to make them appear as children.

    The report does not give names or details of those depicted.

    The charity said it was publishing the research to get the issue on the agenda at next week's UK AI summit at Bletchley Park.

    In one month, the IWF investigated 11,108 AI-generated images that had been shared on a dark web child abuse forum. Of these:

    1. 2,978 were confirmed as images which breach UK law, meaning they depicted child sexual abuse.
    2. More than one in five were rated as Category A, the most serious kind of content.
    3. About 54% (1,372) depicted primary school-aged children, between seven and ten years old.
    4. A further 143 images depicted children aged three to six, and two depicted babies under two years old.

    In June, the IWF warned that predators could exploit AI to create obscene images of children; that fear has now become reality.

    Susie Hargreaves, the IWF's chief executive, said: “Our worst nightmares have come true.”

    “Earlier this year, we warned that AI imagery could soon be indistinguishable from genuine photographs of children being sexually abused, and that these images would start to appear in abundance. We have now passed that point.”

    As the IWF report observes, AI-generated images still cause real harm. Even though children are not directly harmed in their production, the images normalise and legitimise predatory behaviour, and they can waste police time as officers search for children who do not exist.

    In some cases, new forms of offence are also emerging, creating new complexities for law enforcement agencies.

    In one example, the IWF found many images of at least two girls whose pictures from an innocent photoshoot at a non-nude modelling agency had been manipulated to place them in Category A sexual abuse scenes.

    In effect, they are now victims of Category A offences that never took place.

  • Censys lands new cash to grow its threat-detecting cybersecurity service….

    Investment in cybersecurity companies appears to be approaching a turning point.

    Despite a tough summer, total venture capital investment in security startups grew by about 12% in Q3, from approximately $1.7 billion to almost $1.9 billion, according to Crunchbase. That is still down around 30% year on year, but it is a modest sign that investors' appetite for cyber is rising again.

    The reasons likely include, among other factors, increased enterprise spending on cybersecurity.

    As noted in an IANS Research brief, corporate IT security spending increased by about 6% from 2022 to 2023, a modest but meaningful rise. Most respondents attributed the increase to “increased risk” and “digital transformation”, that is, the digitizing of legacy apps, products and services.

    That helps a startup like Censys, which gives customers insights into their vital internet-connected infrastructure. Censys today revealed that it has secured $50 million in Series C financing plus a $25 million debt round, bringing its total raised to $178.1 million.

    Censys traces its roots to ZMap, the internet-scanning tool invented by the company's chief scientist, Zakir Durumeric, in 2013. In 2015, realizing the tool could be used to improve rather than merely observe the state of security on the internet, Durumeric assembled a team to revamp and extend ZMap's capabilities.

    Censys CEO Brad Brooks told TechCrunch in an email interview that the company's belief remains unchanged: complete visibility into everything that is exposed to the internet is the only way to protect vulnerable people and organizations from external threats.

    To that end, Censys now offers a range of mechanisms for tracking hosts and their security status. Customers get a vulnerability database “enhanced by context”, along with a dashboard for monitoring and assessing their internet-exposed assets (for example, servers and office computers).

    According to Brooks, most users treat the platform as a threat-hunting tool, searching for and investigating potential security compromises while also making use of Censys's asset-monitoring technology. Others use Censys to assess how long, and how severely, they have been exposed to particular risks.

    “Censys utilizes leading internet intelligence to provide leaders and their teams with the data they need to stay ahead of sophisticated cyber-attacks,” Brooks said. Customers can also use Censys benchmarks to assess their performance over time and to measure themselves against industry standards.

    More recently, Censys jumped on the generative AI bandwagon with a chatbot that lets people search its security datasets using natural language, for instance by asking the platform to “show me all Australian locations with both FTP and HTTP”. The chatbot can also translate search syntax from third-party threat-hunting platforms such as Shodan or ZoomEye into a search language that Censys understands.
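
    Censys has not said how the chatbot is built, but a common pattern for this kind of feature is to have a language model translate a plain-English request into the platform's structured query syntax. The Python sketch below illustrates that translation step in toy form; the field names, query syntax and keyword rules are invented for the example and are not Censys's real API or query language (a production system would use an LLM rather than hand-written rules).

    ```python
    # Hypothetical sketch of a natural-language-to-search-query translation layer.
    # The field names and query syntax below are invented for illustration; they
    # are NOT the real Censys (or Shodan/ZoomEye) query languages.
    import re

    RULES = [
        # (pattern found in the user's request, fragment of structured query)
        (r"\bAustralian?\b", 'location.country="AU"'),
        (r"\bFTP\b", 'services.name="FTP"'),
        (r"\bHTTP\b", 'services.name="HTTP"'),
    ]

    def translate_to_query(request: str) -> str:
        # Toy stand-in for the LLM step: map phrases in the plain-English request
        # to query fragments and join them with "and".
        fragments = [frag for pattern, frag in RULES if re.search(pattern, request, re.I)]
        return " and ".join(fragments) if fragments else "<no query terms recognised>"

    print(translate_to_query("show me all Australian locations with both FTP and HTTP"))
    # -> location.country="AU" and services.name="FTP" and services.name="HTTP"
    ```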

    Censys claims 350,000 users of its free service and more than 180 customers, including the Fed and the Department of Homeland Security. The firm intends to use the new capital to grow: Ann Arbor, Michigan-based Censys expects to have around 150 employees by the end of 2023.

    According to Brooks, demand for cybersecurity remains high even amid economic uncertainty. In fact, tailwinds such as major enterprises moving to cloud-based solutions and shifting thousands of employees to remote work have driven up demand and inbound enquiries.

    Even so, it will not be easy for Censys to sustain its momentum against some fierce headwinds.

    Venture funding for the remaining months of the year is threatened with stagnation, or perhaps even decline, because of turmoil in Europe and tensions between the US and China. Higher interest rates are also reducing opportunities for new investment, which affects startups' ability to raise money.

    Hedge funds, late-stage investors and crossover funds (investors in both publicly traded and privately held companies) have retreated from the market, and from the technology sector in general, in anticipation of a recession.

    On another note, however, there are grounds for hope.

    As noted in a recent Spiceworks report, although most businesses fear a recession, around two-thirds (66%) of them still plan to increase their annual budgets for cybersecurity in 2024. And according to IDC, global spending on cybersecurity will exceed $219 billion this year, rising to nearly $300 billion by 2026.

  • Twelve Labs is building models that can understand videos at a deep level through AI..

    Text-generating AI is one thing. But AI models that can understand images and video could open up a whole new set of opportunities.

    Take, for example, Twelve Labs. The San Francisco-based startup trains AI models to tackle hard video-language alignment problems, according to co-founder and CEO Jae Lee.

    Twelve Labs was founded to build the infrastructure for multimodal video understanding, with its first undertaking being video semantic search, or “CTRL+F for videos”, Lee told TechCrunch in an email interview. The company's broader vision is to provide the tools developers need to build intelligent software that can see, hear and understand the world as people do.

    Twelve Labs' models attempt to map natural language to what is happening inside a video, including actions, objects and background sounds, enabling developers to build applications that can search within videos, classify scenes, split footage into chapters and summarize clips.
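
    To make the “CTRL+F for videos” idea concrete, here is a hypothetical sketch of what calling such a semantic video search service from Python might look like. The endpoint URL, parameter names and response fields are all invented for illustration; they are not the actual Twelve Labs API.

    ```python
    # Hypothetical client for a semantic video search service ("CTRL+F for videos").
    # The URL, parameters and response shape are invented; this is NOT the real
    # Twelve Labs API.
    import requests

    API_BASE = "https://api.example-video-search.dev/v1"  # placeholder URL
    API_KEY = "YOUR_API_KEY"                               # placeholder credential

    def search_video(index_id: str, query: str) -> list:
        # Send a natural-language query and get back matching video moments,
        # each with a clip start/end time and a confidence score.
        resp = requests.post(
            f"{API_BASE}/search",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"index_id": index_id, "query": query, "limit": 5},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["matches"]

    for match in search_video("demo-index", "goalkeeper saves a penalty kick"):
        print(match["video_id"], match["start"], match["end"], match["score"])
    ```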

    According to Lee, Twelve Labs' technology can be used for things such as ad insertion and content moderation, for instance distinguishing violent knife videos from instructional ones. It can also be used for media analytics and to automatically generate highlight reels, or blog post headlines and tags, from video.

    During my conversation with Lee, I asked whether there is any risk of bias in such models, given that it is established science that models amplify the biases in the data on which they are trained. For example, a video-understanding model trained mostly on clips of local news, which often covers crime in racialized and sensationalized terms, could pick up racist and sexist patterns along the way.

    According to Lee, Twelve Labs strives to meet internal bias and “fairness” metrics for its models before releasing them, and the company plans to publish model-ethics-related benchmarks and datasets at some point. Beyond that, he had nothing more to share. As for how the product differs from large language models such as ChatGPT, Lee says Twelve Labs' model is specifically designed and built to make sense of everything in a video, integrating the visual, audio and speech elements it contains.

    Google is working on its own multimodal model, MUM, which it is incorporating into video recommendations across Google Search and YouTube. Beyond MUM, Google, along with Microsoft and Amazon, offers API-based, AI-enabled services that recognize objects, places and actions in videos and extract rich metadata at the frame level.

    Lee maintains, however, that Twelve Labs is differentiated both by the quality of its models and by the platform's fine-tuning features, which let customers refine the generic model with their own data for more “domain-specific” video analysis.

    Mock-up of the API for fine-tuning the model to work better with salad-related content.

    Today, Twelve Labs is launching its most recent multimodal model, Pegasus-1, which understands a range of prompts for whole-video analysis. Pegasus-1 can, for example, generate an in-depth description of a video, or just a few highlights with timestamps.

    According to Lee, most business use cases require understanding video material in much more complex ways than the limited and simplistic capabilities of conventional video AI models allow, and leveraging powerful multimodal models is the only way for enterprise organizations to reach a human-level understanding of video content without having to review the material in person.

    Lee says that since launching in private beta in early May, Twelve Labs has signed up more than 17,000 developers. The company is currently working with an undisclosed number of organizations in sectors such as sports, media, e-learning, entertainment and security, and it recently established a partnership with the NFL.

    Twelve Labs is also still raising money, a must-have for any startup. The firm says it has just closed a $10 million strategic financing round from Nvidia, Intel and Samsung Next, taking its total raised to $27 million.

    According to Lee, the new investment is aimed at strategic partners who can accelerate the company's research (compute), product development and distribution. He says the funding will fuel continued innovation in video understanding, building on the lab's research, so that Twelve Labs can bring its most powerful models to customers, whatever their use cases, and keep pushing the boundaries of what enterprises can do with the technology.