Category: Artificial Intelligence

  • Airbnb is using artificial intelligence to assist in preventing house parties.

    How would you react if you rented out your home for a few nights, only to come back and discover it had been used to host a wild house party?

    If you saw the damage such a party had caused to your property, you would most likely be furious, upset, or both.

    Such parties became a familiar sight around the world during the Covid-19 period. With pubs, bars and most nightclubs closed, young people went looking for other places to spend their free days dancing, drinking and socialising.

    This prompted a response from the short-term rental giant Airbnb, which declared a global ban on parties and vowed to take all necessary measures to prevent them. Those measures included blocking offenders from making new bookings and imposing restrictions on guests under 25 without a record of good reviews.

    Recently, Airbnb said the number of reported parties declined by 55% between 2020 and last year. But with the battle still unfolding, the US company is now introducing an artificial intelligence-powered system designed to screen out potential troublemakers.

    The system now operates all over the world. When you attempt to make a new booking on Airbnb, the AI immediately checks, among other things, whether you created your account less than six months ago and whether you are trying to stay in the same city where you live.

    It also considers how long you are staying, since a one-night booking can be a warning sign in itself, and whether the stay coincides with party occasions such as Halloween or New Year’s Eve.

    A local resident booking a room for just one night on New Year’s Eve could well mean “a party”, according to Naba Banerjee, head of safety and trust at Airbnb.

    Ms Banerjee says that if the AI judges a particular reservation to be a potential party risk, the booking can be declined or the guest redirected to Airbnb’s partner hotel sites. […] In this way, Airbnb aims to reassure hosts and guests that they can trust the platform when renting out their homes.
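
    Airbnb has not published how its screening model works, but the signals described above lend themselves to a simple risk-scoring sketch. The minimal Python sketch below is illustrative only; the rules, weights and thresholds are assumptions, not Airbnb’s actual system:

      from datetime import date

      # Toy risk score built from the publicly described signals (account age,
      # booking in one's own city, one-night stay, party-prone dates).
      # Airbnb's real model and its weights are not public.
      PARTY_DATES = {(10, 31), (12, 31)}  # Halloween, New Year's Eve

      def party_risk(account_created: date, checkin: date, nights: int,
                     guest_city: str, listing_city: str) -> float:
          score = 0.0
          if (checkin - account_created).days < 180:       # account under ~6 months old
              score += 0.3
          if guest_city.lower() == listing_city.lower():   # staying in own city
              score += 0.3
          if nights == 1:                                   # one-night booking
              score += 0.2
          if (checkin.month, checkin.day) in PARTY_DATES:   # Halloween or NYE
              score += 0.2
          return score

      # A local guest with a new account booking one night on New Year's Eve
      # scores 1.0; per the article, such a booking might be declined or
      # redirected to a partner hotel site.
      print(party_risk(date(2023, 9, 1), date(2023, 12, 31), 1, "Leeds", "Leeds"))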

    One such host is Lucy Paterson, who rents out a one-bedroom annexe in Worcestershire and has taken more than 150 bookings since listing it.

    Part of her plan as an Airbnb host, she explains, was to offer just one bedroom, which in itself discourages parties. Not everything has been perfect every time, she admits, but more than 99% of her guests have been terrific.

    She says Airbnb’s newest AI system gives her “additional assurance”.

    Ms Banerjee believes the AI will only get stronger with time, as it continually learns from the ever-growing volume of data it processes.

    Another large AI system is deployed at Turo, an online car-sharing marketplace, to protect the people who rent out their vehicles.

    Using software known as DataRobot, Turo’s AI can quickly spot the threat of theft. It also helps calculate car prices, factoring in each vehicle’s capacity, power and speed rating, as well as when the driver intends to start the rental period.

    Separately, Turo incorporates AI that lets certain users speak to its app, asking for a particular vehicle and date. The AI responds with text displayed on screen, giving a tailored list of suggested cars matching the requirements. The service, offered to subscribers, is built on ChatGPT-4, the popular consumer AI system, incorporated into Turo’s own software.
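
    Turo has not published details of this integration, but the flow it describes, a free-form request in and structured search criteria out, can be sketched against OpenAI’s chat API. In this hypothetical sketch, the prompt, the JSON fields and the search_cars helper are assumptions, not Turo’s actual code:

      import json
      from openai import OpenAI  # pip install openai

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      def parse_request(user_text: str) -> dict:
          """Turn a free-form car request into structured search filters (illustrative)."""
          resp = client.chat.completions.create(
              model="gpt-4",
              messages=[
                  {"role": "system",
                   "content": "Extract car-rental filters from the user's request. "
                              'Reply with JSON only, e.g. {"vehicle_type": "suv", '
                              '"city": "Miami", "start_date": "2023-12-22", '
                              '"end_date": "2023-12-27"}'},
                  {"role": "user", "content": user_text},
              ],
          )
          return json.loads(resp.choices[0].message.content)

      filters = parse_request("I need a convertible in Miami from Dec 22 to Dec 27")
      print(filters)
      # The structured filters would then feed an ordinary search backend, e.g.
      # results = search_cars(**filters)  # hypothetical Turo-side function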

    Albert Mangahas, the firm’s chief data officer, says the aim is to make searching on Turo easy, which helps establish trust between the company and its customers.

    Edward McFowland III, assistant professor of technology and operations management at Harvard Business School, argues that AI, properly applied, is well suited to flagging potentially problematic customers. This is where a layer of AI can come in handy, reducing friction between businesses and consumers.

    He cautions, nonetheless, that even a well-tuned AI model will produce false positives, such as turning away a young person who wants to book a flat for New Year’s Eve with no intention of throwing an all-night party. That, he notes, is why building perfect AI technology is so hard.

    Image credit: propertykeeper.co.uk

    Lara Bozabalian’s family rents out its 100 sq m cottage in Toronto, Canada, via Airbnb. She says that however appealing the AI’s predictions may be, she trusts no rules but her own.

    She does not accept first-time customers, she explains, and requires that any guest has been vetted by previous hosts.

  • “Rishi Sunak acknowledges the threats and risks posed by AI while also highlighting its potential.”


    “Prime Minister Rishi Sunak has cautioned that artificial intelligence could potentially facilitate the development of chemical and biological weapons.”

    In a worst-case scenario, Mr. Sunak warned that society might lose all control over AI, rendering it impossible to deactivate. While there’s ongoing debate about the extent of potential harm, he stressed the importance of not ignoring AI risks.

    In a speech aimed at positioning the UK as a global AI leader, the Prime Minister highlighted that AI is already generating employment opportunities. He acknowledged that AI’s advancement would stimulate economic growth and enhance productivity, even though it would impact the job market.

    During the Thursday morning address, the Prime Minister outlined both the capabilities and potential risks associated with AI, including concerns like cyber attacks, fraud, and child sexual abuse, as detailed in a government report.

    Mr. Sunak emphasized that one of the report’s concerns was the potential use of AI by terrorist groups to amplify fear and disruption on a larger scale. He stressed the importance of making safeguarding against the threat of human extinction from AI a global priority. However, he also added, ‘There is no need for people to lose sleep over this at the moment, and I don’t want to be alarmist.’

    He expressed his “optimism” regarding AI’s potential to enhance people’s lives.

    A more immediate concern for many individuals is the impact AI is currently having on employment.

    Mr. Sunak highlighted how AI tools are effectively handling administrative tasks, such as contract preparation and decision-making, which were traditionally human roles.

    He emphasized that he believed education was the key to equipping people for the evolving job market, noting that technology has always transformed how people earn a living.

    For example, automation has already altered the nature of work in factories and warehouses, but it hasn’t completely eliminated human involvement.

    The Prime Minister argued against the simplistic notion that artificial intelligence would “take people’s jobs,” instead encouraging the public to view the technology as a “co-pilot” in their daily work activities.

    Numerous reports, including declassified information from the UK intelligence community, have outlined a series of concerns regarding the potential threats posed by AI in the coming two years.

    According to the “Safety and Security Risks of Generative Artificial Intelligence to 2025” report by the government, AI could be used to:

    1. Strengthen terrorist capabilities in areas such as propaganda, radicalization, recruitment, funding, weapons development, and attack planning.
    2. Increase occurrences of fraud, impersonation, ransomware, currency theft, data harvesting, and voice cloning.
    3. Amplify the distribution of child sexual abuse imagery.
    4. Plan and execute cyberattacks.
    5. Undermine trust in information and employ ‘deepfakes’ to influence public discourse.
    6. Gather knowledge on physical attacks carried out by non-state violent actors, including the use of chemical, biological, and radiological weapons.

    Opinions among experts are divided when it comes to the AI threat, as previous concerns about emerging technologies have not always materialized to the extent predicted.

    Rashik Parmar, CEO of The Chartered Institute for IT, emphasized that AI will not evolve like the fictional character “The Terminator.” Instead, with the right precautions, it can become a trusted companion in our lives from early education to retirement.

    In his speech, Mr. Sunak stressed that the UK would not hastily impose regulations on AI due to the complexity of regulating something not fully understood. He advocated a balanced approach that encourages innovation while maintaining proportionality.

    Mr. Sunak aims to position the UK as a global leader in AI safety, despite not being able to compete with major players like the US and China in terms of resources and tech giants.

    Most of the Western AI developers are currently cooperating, but they remain secretive about the data their tools use and their underlying processes.

    The UK faces the challenge of persuading these firms to adopt a more transparent approach rather than, as the Prime Minister put it, “marking their own homework.”

    Professor Carissa Veliz, an associate professor in philosophy at the University of Oxford, pointed out that the UK has been historically reluctant to regulate AI, making Sunak’s statement about the UK’s leadership in AI safety interesting. She added that regulation often drives impressive and crucial innovations.

    The Labour Party criticized the government for not providing concrete proposals on how it plans to regulate the most powerful AI models and called for action to ensure public protection.

    The UK is hosting a two-day AI safety summit at Bletchley Park in Buckinghamshire, with China expected to attend. This decision has sparked criticism due to tense relations between the two countries. Former Prime Minister Liz Truss has requested the rescission of China’s invitation, emphasizing the importance of working with allies.

    However, Mr. Sunak defended the decision, asserting that a serious strategy for AI requires engagement with all leading AI powers worldwide.

    The summit will bring together world leaders, tech companies, scientists, and academics to discuss emerging technology. Professor Gina Neff, Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, criticized the summit’s focus, arguing that the most pressing concerns, such as digital skills and the responsible use of powerful AI tools, are not adequately addressed.

    She emphasized that this poses risks for individuals, communities, and the planet.

    British Prime Minister Rishi Sunak delivers a speech on AI at Royal Society, Carlton House Terrace on October 26, 2023 in London, England. Peter Nicholls/Pool via REUTERS
  • “Can artificial intelligence be effectively regulated?”

    “Is it possible to effectively govern artificial intelligence? According to Jimmy Wales, the founder of Wikipedia, believing in such control is comparable to engaging in ‘magical thinking’.”

    Politicians and their aides often have only a limited understanding of how the internet works and of what can realistically be accomplished, according to Mr Wales, who has spent considerable time explaining technology and the importance of free speech to politicians around the world.

    “It’s like saying the United Nations should regulate [the image-editing application] Photoshop or something,” he says. It would make no sense, he contends.

    How AI might be regulated became a fiery point of debate this summer, when the UN secretary general convened the Security Council’s first ever meeting on the possible threats posed by artificial intelligence.

    Speaking about everything from AI-powered cyber attacks and the risk of malfunctioning AI to the spread of misinformation and the interaction between AI and nuclear weapons, Mr Guterres said that if no action is taken to mitigate these risks, we fail in our duty to present and future generations.

    Mr Guterres has since moved to set up a UN panel to examine what global regulation is required. To be called the High-Level Advisory Body for Artificial Intelligence, it will be made up of “current and past government experts, industry representatives, civil society actors, and academics”.

    The body is due to publish its preliminary findings by the end of this year. Meanwhile, last week US tech leaders, among them Elon Musk and Meta boss Mark Zuckerberg, met American legislators in Washington to debate the framing of future laws on AI systems.

    Nevertheless, a number of AI insiders doubt that global regulation is viable. One of them is Pierre Haren, who has studied AI for 45 years.

    He spent more than seven years at computing giant IBM, leading a team that installed the firm’s Watson supercomputer technology for clients. Launched in 2010, Watson was one of the earliest AI systems able to answer users’ questions.

    Despite that background, he says he has been amazed at the speed with which ChatGPT and other generative AI applications have arrived on the market over the past year.

    In simple terms, generative AI is AI that can rapidly create new content, including text, pictures, music and video. It can take an idea from one context and apply it in a new scenario.

    That ability, Mr Haren says, is human-like: this is not a parrot repeating whatever you feed into it, but a system making high-level analogies.

    So what kind of rules could stop such an AI running amok? None that would work worldwide, according to Mr Haren, because some countries will never sign up to any agreement.

    Countries such as North Korea and Iran do not cooperate with the rest of the world, he notes, and cannot be expected to implement AI regulation.

    Regulation, he argues, is pie in the sky where non-cooperative actors are concerned: who can imagine Iran, as it looks for ways to destroy Israel, complying with AI rules?

    Physicist Reinhard Scholl established the “AI for Good” programme, which focuses on finding and deploying practical AI tools that can support the UN’s Sustainable Development Goals, targets that include ending poverty, eliminating hunger and ensuring clean drinking water for all.

    AI for Good began as an annual conference in 2017, but is now a regular calendar of webinars dedicated to every aspect of AI.

    It has struck a chord, attracting more than 20,000 followers, so there is clearly an appetite for “positive” AI. That does not mean, however, that Mr Scholl is relaxed about the technology’s risks.

    Should AI be regulated? Yes, he says: it would be naive not to regulate it, just as car makers and toy makers are regulated.

    He fears that AI could serve as an entry point for ill-intentioned people seeking dangerous capabilities.

    “A physicist can show you how to construct a nuclear bomb on paper, but in real life it is rather difficult,” he says. Someone using AI to design a biological weapon, however, might not need that kind of expert knowledge.

    His fear is that, sooner or later, somebody will use AI to cause damage on a very large scale.

    But what form should a future UN-backed AI regulator take? One suggestion is that it emulate the International Civil Aviation Organization (ICAO), the body that oversees the safety of global air travel and has 193 member nations.

    One AI expert who supports establishing an ICAO-like body is Robert Opp, chief digital officer of the UN Development Programme (UNDP).

    The UNDP helps countries achieve economic prosperity and stamp out poverty, and Mr Opp looks for ways to use technology to amplify the organisation’s impact.

    That includes using AI to run rapid checks on satellite photographs of farmland in poor regions. At the same time, Mr Opp says his objective is not to restrict the ability of generative AI to help poorer people grow their businesses.

    Yet he also recognises that AI can be a threat, and considers the question of its governance urgent.

    Urgent or not, Wikipedia’s Mr Wales thinks a UN-led approach would be utterly misguided.

    He believes international bodies are committing a huge blunder by exaggerating the role that tech giants such as Google play in the tsunami of AI products. However noble the intentions, Mr Wales notes, nobody has the power to control individuals’ willingness to use AI technology.

    Beyond the boundaries of the big tech companies, he points out, a vast number of programmers are working with baseline AI software that is openly available on the internet. With more than 10,000 individual developers building on these innovations, he argues, regulation is never going to happen.

  • “Will Rishi Sunak’s significant summit rescue us from the AI dystopia?”

    “If you’re uncertain about your stance on artificial intelligence, you’re in good company.”

    Will it save humanity or destroy it? The stakes are that high, and the jury is still out, even among the world’s leading experts.

    That is why some AI pioneers have called for development to slow down, or even stop altogether, arguing that the technology is advancing too fast. Others counter that doing so would curtail its ability to achieve remarkable things, from designing new drugs to helping solve problems related to climate change.

    Next week, around 100 world leaders, tech executives, scientists and AI researchers will gather at Bletchley Park in the UK to discuss how to reap the greatest benefits from this potent technology while minimising its risks.

    The UK’s forthcoming AI Safety Summit focuses on extreme risks, such as superintelligence and an “intelligence explosion”: so-called “frontier AI” that does not yet exist, although the field is evolving at great speed.

    Critics, however, argue that the meeting should concentrate on AI’s immediate problems, such as its energy consumption and its effect on jobs.

    A report published by the UK government last week highlights some disturbing potential dangers, including bioterrorism, cyber-attacks, self-governing AI and the increased sharing of child sexual abuse deepfakes.

    Against that alarming backdrop, Prime Minister Rishi Sunak has urged people not “to lose any sleep”. He has an objective, and an ambitious one: he hopes to make the UK the world leader in AI safety.

    When the summit was announced and billed as the world’s first, however, many eyebrows were raised. Would the world’s most powerful people really travel to a leafy corner of Britain as winter sets in, close to the US Thanksgiving holiday? It seemed unlikely.

    There is no official guest list, but it is now clear that the US tech giants will be well represented, by high-level executives if not necessarily their CEOs.

    The commercial sector is full of enthusiasm for the summit. Stability AI chief executive Emad Mostaque has called it a once-in-a-generation opportunity for the UK to cement its status as an AI superpower.

    He urged the government and other policymakers to support AI safety across the whole ecosystem, from corporate labs to independent researchers, to keep Britain safe and competitive.

    The tech firms should indeed play a crucial role in the discussion, as the leading producers of AI systems. But these feel like conversations they are already having among themselves; diversity of thought is critical here.

    The contingent of world leaders is something of a hodge-podge. US Vice-President Kamala Harris will be there, but Canada’s Prime Minister Justin Trudeau will not. European Commission President Ursula von der Leyen will attend, but German Chancellor Olaf Scholz will not. Controversially, China has been invited to take part despite its strained relations with Western nations; it remains a formidable technology power.

    Interestingly, UN Secretary-General Antonio Guterres is also travelling to the summit, a sign of the appetite for some form of global oversight of AI technologies.

    Some experts, however, worry that the summit is tackling issues in the wrong order. The probability of extreme doomsday scenarios is relatively low, they contend, while far more immediate challenges, already on people’s doorsteps, are much more unsettling.

    According to Prof Gina Neff, who heads an AI centre at the University of Cambridge, the real concerns are what will happen to our jobs, our news, our means of communication, our people, our communities and, ultimately, our planet.

    Within four years, the computing infrastructure needed to power the AI industry could consume as much energy as the Netherlands does. The summit will not discuss this.

    We also know that AI has begun disrupting jobs. A friend of mine works at a local marketing agency that once employed five copywriters; now there is just one, who edits what ChatGPT produces. The Department for Work and Pensions, meanwhile, uses AI to assist with benefits claims.

    And where do these tools get their training data? The commercial companies that own them keep such details secret. Yet an AI is arguably only as good as what it learns from, and bias and discrimination have already crept into many of the tools in everyday use, including ready-made ones.

    These problems will surface at the summit, but they will not be at its heart.

  • The chip maker that became an AI superpower…

    Shares in computer chip designer Nvidia have soared over the past week, taking the company’s valuation above the one trillion dollar mark.

    Image credit: Getty Images

    Nvidia was known for its graphics processing computer chips.

    Passing that mark makes Nvidia one of an exclusive club of US companies, alongside Apple, Amazon, Alphabet and Microsoft.

    The surge followed the announcement of its second-quarter earnings late on Wednesday, in which the company said it was increasing chip production in response to “surging demand”.

    Nvidia now dominates the market for AI chips.

    Interest in the sector turned into a frenzy after ChatGPT was first released to the public last November, spreading far beyond the technology world.

    From helping to draft speeches to writing software code and devising recipes, ChatGPT has become one of the most widespread applications of AI.

    None of it would be possible without powerful computer hardware, in particular chips from California-based Nvidia.

    Most AI applications today run on Nvidia hardware, although the company first made its name building graphics processing units (GPUs) for computer games.

    Nvidia is the leading technology player enabling this new thing called artificial intelligence, says Alan Priestley, a semiconductor industry analyst at Gartner.

    According to Dan Hutcheson, an analyst at TechInsights, “Nvidia is to AI as Intel was to PCs.”

    ChatGPT was trained using 10,000 of Nvidia’s GPUs clustered together in a supercomputer belonging to Microsoft.

    It is one of many supercomputers, some publicly known and some not, that have been built with Nvidia GPUs for scientific as well as AI purposes, according to Ian Buck, general manager and vice-president of accelerated computing at Nvidia.

    Recent research from CB Insights estimates that Nvidia holds about 95% of the GPU market for machine learning.

    Its AI chips cost roughly $10,000 (£8,000) each, with the newest and most powerful versions selling for far more.

    So how did Nvidia become such a central player in the AI revolution?

    In short: a bold bet on its own technology, combined with good timing.

    Nvidia was co-founded in 1993 by Jensen Huang, who remains its chief executive today. The company initially focused on improving graphics for gaming and other applications.

    In 1999 it introduced the GPU to enhance how computers display images.

    GPUs excel at processing many small tasks at once, in this case millions of pixels, a technique known as parallel processing.

    In 2006, researchers at Stanford University discovered that, beyond gaming, GPUs could also be used to accelerate mathematical operations in a way ordinary processing chips could not.

    At that point, Mr Huang made a decision that was critical to the development of AI as we know it.

    He committed his company’s resources to creating a programming tool that unlocked GPUs not just for graphics, but for any application that could take advantage of parallel processing.
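
    The article does not name the tool, but the idea it unlocked, running one operation across millions of values at once, can be illustrated in Python with CuPy, a NumPy-compatible library that executes array operations on Nvidia GPUs. The library choice and the numbers below are illustrative assumptions, not Nvidia’s own code:

      import numpy as np

      try:
          import cupy as cp  # NumPy-compatible arrays that execute on Nvidia GPUs
          xp = cp
      except ImportError:
          xp = np  # fall back to the CPU if no GPU stack is installed

      # One elementwise pass over 10 million values, e.g. pixel brightnesses.
      # Written once, it runs across thousands of GPU cores in parallel
      # instead of looping one value at a time on a CPU.
      pixels = xp.random.random(10_000_000)
      adjusted = xp.clip(pixels * 1.2 + 0.05, 0.0, 1.0)
      print(float(adjusted.mean()))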

    Nvidia added that capability to the chips it was already making for gamers, who did not especially need it and may not even have known it was there. But for researchers, it offered a new form of high-performance computing on consumer hardware.

    It was those capabilities that triggered the initial successes of contemporary AI.

    AlexNet, an AI that could classify images, was unveiled in 2012. It had been trained using just two of Nvidia’s programmable GPUs.

    The training took only a few days, rather than the months it might have taken on hundreds of ordinary processing chips.

    Word spread quickly among computer scientists, who began buying the cards to run a brand new class of workload.

    As Mr Buck puts it, AI found the company.

    Nvidia pressed home its advantage by investing in new kinds of GPU specialised for AI, as well as software that makes the technology easier to use.

    A decade and many billions of dollars later, ChatGPT emerged: an AI that answers questions with strikingly human-sounding responses.

    Metaphysic’s deepfakes of Tom Cruise, 2021.

    Metaphysic, an American AI start-up, produces photorealistic videos of celebrities and others; the Tom Cruise deepfakes it made that year had everyone talking.

    It trains and runs its models on hundreds of Nvidia GPUs, some bought from Nvidia and others accessed through cloud computing services.

    According to Tom Graham, its co-founder and chief executive, Nvidia is the only realistic option: the company is “way out in front”.

    Nonetheless, while Nvidia looks secure in the short term, the longer term is harder to predict. Nvidia, says Kevin Krewell, an industry analyst at TIRIAS Research, has a bullseye on it that every competitor is aiming at.

    Some competition comes from other major semiconductor companies. AMD, like Intel better known as a CPU maker, now produces dedicated GPUs for AI (Intel entered that market only recently).

    Google uses its own tensor processing units (TPUs) for search results and certain machine-learning tasks, while Amazon has a custom chip for training AI models.

    Microsoft is also rumoured to be developing an AI chip, and Meta has its own AI chip project.

  • Nvidia says…US orders immediate halt to some AI chip exports to China.

    Tech giant Nvidia says the US has told it to stop shipping some of its advanced artificial intelligence chips to China immediately.

    Image credit: EPA-EFE/Rex/Shutterstock

    The restrictions had been due to come into force 30 days after 17 October.

    That was when President Joe Biden’s administration announced measures to stop countries including China, Iran and Russia from buying advanced AI chips made by Nvidia and other manufacturers.

    Nvidia did not explain why the timeline had been brought forward.

    In a filing to the US Securities and Exchange Commission, Nvidia said the government had brought the licensing requirement into force immediately, but added that, given the strength of global demand for its products, it did not expect the change to have a material short-term impact.

    The rules block shipments of the latest generation of Nvidia’s AI chips, versions originally developed for the Chinese market to comply with previously imposed export restrictions.

    The accelerated implementation of the curbs is the latest manoeuvre in the technology battle between the US and China.

    Chinese authorities have yet to comment on Nvidia’s announcement, but Beijing hit back last week after the Biden administration unveiled the new restrictions on exports of more advanced chips.

    China’s foreign ministry said the curbs “violate the principles of a market economy and fair competition”.

    The move is seen as a way of closing gaps and loopholes found in the first wave of chip controls, introduced in October last year.

    The US says the measures are meant to stop China acquiring highly sophisticated technologies that could be used to develop its military, particularly in areas such as AI.

    Soaring demand for its AI chips has boosted Nvidia’s share price by over 300%, placing it among the world’s most valuable companies.

    The firm joined the select group of companies with a stock market valuation above one trillion dollars in May, alongside tech titans Apple, Amazon, Alphabet and Microsoft.

    The California-based company now dominates the market for AI chips.

    Advanced Micro Devices (AMD), another major supplier of AI chips to China, has issued no statement on the expedited restrictions and did not immediately respond to a BBC enquiry.

    The US Department of Commerce declined to comment when contacted by the BBC.

  • National Star College students to gain independence using AI technology…

    A new £6.2m residential building equipped with the latest technology is due to open at a college for students with disabilities.

    An interactive fridge allows students to see inside without opening the door by using the command “tell me what is in my fridge”

    Image credit: BBC News

    The accommodation at National Star College works as a smart house, with voice-controlled facilities throughout.

    This will give students the opportunity to adapt the AI to their individual needs.

    The building will be opened later by disability campaigners Jack Thorne and Rachel Mason.

    The college, which operates at Ullenwood, near Cheltenham, provides education and rehabilitation for young people with a range of disabilities. It is believed the technology will give students greater independence and a better chance of adapting to life after college.

    The 13-room, single-storey “Building a Brighter Future” building is fitted with a tracking hoist system for lifting and with artificial intelligence features such as talking fridges controlled by voice commands.

    “This is an opportunity for us to introduce these devices to the students in a friendly environment at National Star College,” explained Maizie Morgan, the Assistive Technology Technician.

    She added that the objective is for current and future students to use the technology, see what is available, and eventually incorporate it into their own bedrooms before leaving college.

    According to the principal, Simon Welch, the technology has been adapted to give students the personal assistance they need.

    “We take into account the young people with disabilities and what matters most for them,” he said.

    The innovation, he added, lies not in the technology itself but in how the college works with people.

    Student Jaspar Tomlinson had a chance to try out the system before its launch.

    He is non-verbal, but can issue commands to the smart devices using an eye-controlled electronic communicator.

    A single action word can then control the devices and appliances in the rooms.
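
    As a rough illustration of how single action words can drive a room’s devices, here is a minimal dispatcher sketch in Python; the command words and actions are invented for illustration and this is not National Star’s actual software:

      # Map single recognised action words from the communicator to device actions.
      ACTIONS = {
          "fridge": lambda: print("Showing fridge contents on screen..."),
          "lights": lambda: print("Toggling the bedroom lights..."),
          "blinds": lambda: print("Opening the blinds..."),
      }

      def handle_command(word: str) -> None:
          """Run the action for one recognised command word, if any."""
          action = ACTIONS.get(word.strip().lower())
          if action:
              action()
          else:
              print(f"Unrecognised command: {word!r}")

      handle_command("fridge")  # e.g. after "tell me what is in my fridge"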

    He says, “It is good because it gives me confidence until I leave college.”

    Peter Horne, National Star’s deputy chief executive, said the new accommodation would improve quality of life for young people with multiple physical and learning disabilities, offering a stimulating environment in which to live, study and enjoy life.

    Jaspar Tomlinson is able to control devices in his accommodation using his electronic communicator.

  • Paedophiles using AI to turn celebrities into children…

    Paedophiles are using artificial intelligence (AI) to create images of singers and film stars as children.

    By Joe Tidy, cyber correspondent

    The Internet Watch Foundation (IWF) said images of one female celebrity, reworked to look like a child, were circulating among predators.

    On one dark-web forum, images of child actors are also being manipulated to appear sexual, the charity said.

    Bespoke image generators are also being used to create hundreds of new photos of real victims of child sexual abuse.

    The details come from the IWF’s latest report on the escalating problem, published to draw attention to the danger posed by paedophiles using AI systems that turn simple text prompts into images.

    Ever since these powerful image-generation systems became publicly available, researchers have warned that they could be abused to create explicit imagery.

    In May, Home Secretary Suella Braverman and US Homeland Security Secretary Alejandro Mayorkas issued a joint pledge to combat the spread of “despicable” AI-generated child sexual abuse images.

    In the report, the IWF describes how researchers spent a month logging AI imagery on a single darknet site, identifying more than 2,800 synthetic images that would break UK law.

    Analysts say a recent trend involves predators taking a single photo of a known child-abuse victim and using it to generate many more images of the same child in different sexual-abuse settings.

    One folder contained 501 images of a single real victim, a girl who was roughly nine to ten years old when she was abused. Predators also shared a finely tuned AI model file in the folder so that others could generate more images of her.

    Some of the imagery is realistic enough that even trained adults could mistake it for genuine photographs; it includes images of celebrities made to look like children.

    Analysts mostly saw images of younger-looking female singers and actresses, altered with imaging software to make them appear to be children.

    The report does not name the celebrities involved.

    The charity said it was publishing the research to get the problem onto the agenda at next week’s UK AI Safety Summit at Bletchley Park.

    Within one month, the IWF investigated 11,108 AI-generated images shared on a dark-web child abuse forum. Of these:

    • 2,978 were confirmed as images that break UK law, meaning they depicted child sexual abuse;
    • more than one in five of the confirmed images were rated Category A, the most serious kind;
    • more than half (1,372) depicted primary-school-aged children, seven to ten years old;
    • 143 showed children aged between three and six, and two showed babies under two years old.

    Fears the IWF voiced in June have become reality: predators are already exploiting AI to create obscene images of children.

    Susie Hargreaves, the IWF’s chief executive, said the organisation’s worst nightmares had come true.

    Earlier this year, she said, the charity warned that AI imagery would soon be indistinguishable from genuine photographs of children being sexually abused, and that such images would appear in abundance. That point has now been passed.

    The IWF report stresses that AI images do real-world harm. Although no child is directly abused in their production, they legitimise and normalise predatory behaviour and can waste police resources on searches for children who do not exist.

    In some cases, new forms of offence are also being investigated, creating fresh complications for law enforcement agencies.

    In one example, the IWF found many images of two girls whose pictures from a photoshoot at a non-nude modelling agency had been manipulated to place them in Category A sexual abuse scenes.

    In reality, they are now victims of Category A offences that never physically took place.

  • Twelve Labs is building AI models that can understand videos at a deep level…

    Image Credits: boonchai wedmakawand / Getty Images

    Text-generating AI is one thing; AI models that understand video could open up a whole new set of possibilities.

    Take, for example, Twelve Labs. The San Francisco-based startup trains AI models to, as co-founder and CEO Jae Lee puts it, “crack hard video-language alignment puzzles”.

    Twelve Labs was set up, Lee explained by email, to build the infrastructure for multimodal video understanding, with its first undertaking being video semantic search: “CTRL+F for videos”. The company’s vision is to help developers build intelligent programs that can see, hear and understand the world the way we do.

    Twelve Labs’ models map natural language to what is happening inside a video, including actions, objects and background sounds. That mapping lets developers build applications that can search within footage, classify scenes, split videos into chapters, summarise clips and extract information.

    According to Lee, the technology can drive things like ad insertion and content moderation, for example distinguishing a violent knife video from an instructional one. It can also be used for media analytics and to automatically generate highlight reels, or blog-post headlines and tags, from video.
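
    Developers access these capabilities through Twelve Labs’ API. The sketch below shows what “CTRL+F for videos” might look like from a developer’s seat; the endpoint URL, parameters and response shape are hypothetical, invented for illustration, and are not Twelve Labs’ documented API:

      import requests

      API_URL = "https://api.example.com/v1/search"  # placeholder endpoint
      API_KEY = "YOUR_API_KEY"

      def search_video(index_id: str, query: str) -> list:
          """Semantic search over an indexed video collection (illustrative)."""
          resp = requests.post(
              API_URL,
              headers={"Authorization": f"Bearer {API_KEY}"},
              json={"index_id": index_id, "query": query},
              timeout=30,
          )
          resp.raise_for_status()
          # Imagined response: clips ranked by relevance, with timestamps.
          return resp.json()["results"]

      for clip in search_video("my-video-index", "chef chopping onions"):
          print(clip["video_id"], clip["start_sec"], clip["end_sec"], clip["score"])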

    Speaking with Lee, I asked whether these models risk encoding bias, since it is well established that models amplify the biases in the data they were trained on. A video-understanding model trained mostly on clips from local news, for example, which often covers crime in racialised, sensationalised terms, could pick up racist and sexist patterns along the way.

    Lee says Twelve Labs tests its models against internal bias and “fairness” metrics before release, and plans at some point to publish model-ethics benchmarks and datasets, though he offered nothing beyond that. As for how its product differs from large language models such as ChatGPT, he says Twelve Labs’ models are purpose-built to make sense of video, integrating the visual, audio and speech signals it contains.

    Twelve Labs has competition. Google is working on its own multimodal model, MUM, which it is incorporating into video recommendations across Google Search and YouTube. Beyond MUM, Google, along with Microsoft and Amazon, offers API-based, AI-enabled services that recognise objects, places and actions in videos and extract rich metadata at the frame level.

    Lee maintains, however, that Twelve Labs is differentiated by the quality of its models and by its fine-tuning features, which let customers refine the platform’s generic model with their own data for more “domain-specific” video analysis.

    Mock-up of an API for fine-tuning the model to work better with salad-related content.

    Today Twelve Labs is launching its newest multimodal model, Pegasus-1, which understands a range of prompts relating to whole-video analysis. It can, for instance, produce an in-depth description of a video or just a few highlights with timestamps.

    Most business use cases call for understanding video in far more complex ways than the limited, simplistic capabilities of conventional video AI models allow, Lee argues; powerful multimodal models, in his view, are the only way an enterprise can get human-level understanding of its video material without reviewing it all in person.

    Lee says that since launching in private beta in early May, Twelve Labs has signed up more than 17,000 developers. The company is working with an undisclosed number of organisations across sectors including sports, media, e-learning, entertainment and security, and recently struck a partnership with the NFL.

    Twelve Labs is also still raising money, an essential activity for any startup. The firm says it has just closed a $10m strategic financing round from Nvidia, Intel and Samsung Next, taking its total raised to $27m.

    According to Lee, the new investment is aimed at strategic partners who can accelerate the company’s research (that is, compute), products and distribution. The funding, he says, will fuel continued video-understanding innovation, allowing Twelve Labs to bring its most capable models to customers whatever their use cases, and to keep pushing the industry’s boundaries as enterprises do remarkable things with video.

  • Apple’s job listings suggest it plans to infuse AI (artificial intelligence) into multiple products…

    Apple seems to be finally getting serious about infusing generative AI into its products, both internal and external, after announcing a solitary “Transformer” model-based autocorrect feature in iOS 17 earlier this year.

    In fact, the company has been posting generative AI job listings almost every week of late. One role on the App Store platform team, for instance, says Apple is building a generative AI-based developer-experience tool for internal use, to assist its app development teams.

    Image Credits: Haje Kamps / MidJourney

    Another listing describes a conversational AI platform, spanning voice and chat, that will interact with customers in Apple Retail. Apple’s job advertisements also cite tasks such as long-form text generation, summarisation and question answering.

    Other AI/ML roles involve building foundational models, giving a “human-like conversational agent” as an example. The firm has also posted vacancies in its Siri Information Intelligence unit, which covers Siri and the Search widgets among other things, and it is seeking developers to work on localised models that run on-device.

    This time around, though, Apple has been unusually explicit about whom it wants to hire.

    According to Bloomberg News, Apple is on track to spend over a billion dollars a year on AI products and services. Failing to ship generative AI so far is seen as “really a pretty big miss”, a source close to the company told the Wall Street Journal.

    According to Bloomberg, the company intends to use large language models (LLMs) to power features such as sentence completion in a coming version of iOS, targeting Siri and the Messages app. The report says Apple is also exploring generative AI-powered enhancements to apps and services such as Xcode to assist developers (possibly the role described above), along with AI-generated playlists for Apple Music and AI-assisted writing in Pages and Keynote.

    Several news stories indicate that Apple is working on an internal project dubbed “Apple GPT”. Nonetheless, beyond its operating system, very little has reached end users so far. Its rivals Meta, Google and Microsoft, by contrast, are already integrating AI across their software and hardware.
