Category: Tech

  • “Rishi Sunak acknowledges the threats and risks posed by AI while also highlighting its potential.”


    “Prime Minister Rishi Sunak has cautioned that artificial intelligence could potentially facilitate the development of chemical and biological weapons.”

    “In a worst-case scenario, Mr. Sunak warned that society might lose all control over AI, rendering it impossible to deactivate. While there’s ongoing debate about the extent of potential harm, he stressed the importance of not ignoring AI risks.

    In a speech aimed at positioning the UK as a global AI leader, the Prime Minister highlighted that AI is already generating employment opportunities. He acknowledged that AI’s advancement would stimulate economic growth and enhance productivity, even though it would impact the job market.

    During the Thursday morning address, the Prime Minister outlined both the capabilities and potential risks associated with AI, including concerns like cyber attacks, fraud, and child sexual abuse, as detailed in a government report.

    Mr. Sunak emphasized that one of the report’s concerns was the potential use of AI by terrorist groups to amplify fear and disruption on a larger scale. He stressed the importance of making safeguarding against the threat of human extinction from AI a global priority. However, he also added, ‘There is no need for people to lose sleep over this at the moment, and I don’t want to be alarmist.’”

    He expressed his “optimism” regarding AI’s potential to enhance people’s lives.

    A more immediate concern for many individuals is the impact AI is currently having on employment.

    Mr. Sunak highlighted how AI tools are effectively handling administrative tasks, such as contract preparation and decision-making, which were traditionally human roles.

    He emphasized that he believed education was the key to equipping people for the evolving job market, noting that technology has always transformed how people earn a living.

    For example, automation has already altered the nature of work in factories and warehouses, but it hasn’t completely eliminated human involvement.

    The Prime Minister argued against the simplistic notion that artificial intelligence would “take people’s jobs,” instead encouraging the public to view the technology as a “co-pilot” in their daily work activities.

    Numerous reports, including declassified information from the UK intelligence community, have outlined a series of concerns regarding the potential threats posed by AI in the coming two years.

    According to the “Safety and Security Risks of Generative Artificial Intelligence to 2025” report by the government, AI could be used to:

    1. Strengthen terrorist capabilities in areas such as propaganda, radicalization, recruitment, funding, weapons development, and attack planning.
    2. Increase occurrences of fraud, impersonation, ransomware, currency theft, data harvesting, and voice cloning.
    3. Amplify the distribution of child sexual abuse imagery.
    4. Plan and execute cyberattacks.
    5. Undermine trust in information and employ ‘deepfakes’ to influence public discourse.
    6. Gather knowledge on physical attacks carried out by non-state violent actors, including the use of chemical, biological, and radiological weapons.

    Opinions among experts are divided when it comes to the AI threat, as previous concerns about emerging technologies have not always materialized to the extent predicted.

    Rashik Parmar, CEO of The Chartered Institute for IT, emphasized that AI will not evolve like the fictional character “The Terminator.” Instead, with the right precautions, it can become a trusted companion in our lives from early education to retirement.

    In his speech, Mr. Sunak stressed that the UK would not hastily impose regulations on AI due to the complexity of regulating something not fully understood. He advocated a balanced approach that encourages innovation while maintaining proportionality.

    Mr. Sunak aims to position the UK as a global leader in AI safety, despite not being able to compete with major players like the US and China in terms of resources and tech giants.

    Most Western AI developers are currently cooperating, but they remain secretive about the data their tools are trained on and how they actually work.

    The UK faces the challenge of persuading these firms to adopt a more transparent approach rather than, as the Prime Minister put it, “marking their own homework.”

    Carissa Veliz, an associate professor in philosophy at the University of Oxford, pointed out that the UK has historically been reluctant to regulate AI, which makes Mr Sunak’s statement about the UK’s leadership in AI safety interesting. She added that regulation often drives impressive and crucial innovation.

    The Labour Party criticized the government for not providing concrete proposals on how it plans to regulate the most powerful AI models and called for action to ensure public protection.

    The UK is hosting a two-day AI safety summit at Bletchley Park in Buckinghamshire, with China expected to attend. This decision has sparked criticism due to tense relations between the two countries. Former Prime Minister Liz Truss has requested the rescission of China’s invitation, emphasizing the importance of working with allies.

    However, Mr. Sunak defended the decision, asserting that a serious strategy for AI requires engagement with all leading AI powers worldwide.

    The summit will bring together world leaders, tech companies, scientists, and academics to discuss emerging technology. Professor Gina Neff, Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, criticized the summit’s focus, arguing that the most pressing concerns, such as digital skills and the responsible use of powerful AI tools, are not adequately addressed.

    She emphasized that this poses risks for individuals, communities, and the planet.

    British Prime Minister Rishi Sunak delivers a speech on AI at the Royal Society, Carlton House Terrace, on October 26, 2023 in London, England.
  • “Can artificial intelligence be effectively regulated?”

    “Is it possible to effectively govern artificial intelligence? According to Jimmy Wales, the founder of Wikipedia, believing in such control is comparable to engaging in ‘magical thinking’.”

    “Very often, politicians and their aides have only a limited understanding of how the internet works, and of what can actually be accomplished,” says Mr Wales, who has spent many days over the years giving politicians around the world insight into technology and the importance of free speech.

    “It’s like saying the United Nations should regulate [the image editing application] Photoshop or something,” he says. It would make no sense, he contends.

    How, and indeed whether, AI should be regulated became a fiercely debated question this summer. It was then that the United Nations Secretary-General called, for the first time, for a UN Security Council meeting to deliberate on the possible threats posed by artificial intelligence.

    Speaking about everything from AI-powered cyber-attacks and the risk of malfunctioning AI to the spread of misinformation and even the interaction between AI and nuclear weapons, Mr Guterres said that if no action were taken to mitigate these risks, we would fail in our duty to present and future generations.

    Mr Guterres has gone on to set up a UN panel to examine what global regulation is required. Called the High-Level Advisory Body on Artificial Intelligence, it is made up of “current and past government experts, industry representatives, civil society actors, and academics”.

    The body will issue its preliminary findings by the end of this year. Meanwhile, last week US tech leaders, including Elon Musk and Meta boss Mark Zuckerberg, met American lawmakers in Washington to debate the development of laws on AI systems.

    Nevertheless, a number of AI insiders doubt whether global regulation is viable. One of them is Pierre Haren, who has studied AI for 45 years.

    He spent seven years at computing giant IBM, where he led a team that installed the firm’s Watson supercomputer technology for clients. Launched in 2010, Watson – which can answer users’ questions – was one of the earliest forms of AI.

    Despite that education and experience, he says he was genuinely amazed by the speed with which ChatGPT and other generative AI applications have arrived on the market over the past year.

    In simple terms, generative AI is AI that can rapidly generate new content – including text, pictures, music and video. It can take an idea from one context and apply it in a new scenario.

    That ability is human-like, says Mr Haren. “This is not a parrot repeating whatever you feed into it,” he adds. “It’s making high-level analogies.”

    So what kind of rules could stop this analogy-making AI from running amok? None, according to Mr Haren, who says some countries will never sign up to such agreements.

    “There are non-cooperative countries in this world, like North Korea and Iran,” he states. “They won’t recognise any regulation on AI. Who can imagine Iran, which is looking for ways to destroy Israel, complying with AI regulations?”

    The “AI for Good” programme was established by physicist Reinhard Scholl. Its focus is identifying and deploying practical AI tools that can help achieve the UN’s Sustainable Development Goals, which include ending poverty, eliminating hunger, and providing clean drinking water for all.

    AI for Good began as an annual conference in 2017, but is now a year-round calendar of webinars devoted to every aspect of AI.

    AI for Good has struck a chord, attracting more than 20,000 followers, so there is clearly an appetite for “positive” AI. That does not mean, however, that Mr Scholl is relaxed about the risks.

    Should AI be regulated? “Yes,” he says – it would be naive not to, just as carmakers and toymakers are regulated.

    He fears that ill-intentioned individuals could use AI to acquire dangerous capabilities, with the technology acting as an entry point to that acquisition.

    “A physicist can show you how to build a nuclear bomb on paper, but building one in real life is rather difficult,” he says. “But someone using AI to design a biological weapon would not need to be knowledgeable about such things.”

    “And therefore somebody could use AI to do something enormously damaging.”

    But what form should a future UN-backed regulatory body for AI take? One suggestion is that it could emulate the International Civil Aviation Organization (ICAO), the international body that oversees the safety of global air travel and has 193 member nations.

    One AI expert who supports establishing an ICAO-like body is Robert Opp, the chief digital officer of the UN Development Programme.

    The organisation helps countries achieve economic prosperity and stamp out poverty, and Mr Opp looks for ways to use technology to amplify its impact.

    That includes using AI to make rapid assessments of satellite photographs of farmland in poor regions. Mr Opp says his aim is not to restrict the potential of generative AI to help poor communities grow their businesses.

    Yet he also recognises that AI can be a threat, and believes the question of how it should be governed is urgent.

    Urgent or not, Wikipedia’s Mr Wales believes UN regulation of AI would be “utterly misguided”.

    He thinks international authorities are making a huge blunder by exaggerating the role that tech giants like Google play in the tidal wave of AI products. However noble regulators’ intentions, he notes, no one has power over individual developers’ willingness to use AI technology.

    Beyond the boundaries of the big tech companies, he points out, numerous programmers are building on baseline AI software that is openly available on the internet. With more than 10,000 individual developers building on these innovations, he argues, regulation is never going to happen.

  • “Will Rishi Sunak’s significant summit rescue us from the AI dystopia?”

    “If you’re uncertain about your stance on artificial intelligence, you’re in good company.”

    Will it save humanity, or spell the end of our world? The stakes are that high, and the jury is still out, even among the world’s leading experts.

    For that reason, some AI pioneers have called for the development of AI to slow down or stop entirely, arguing that it is advancing too fast. Others say that would curtail the technology’s ability to do remarkable things, such as devising formulations for new drugs and contributing to solutions for climate change.

    Next week, around 100 world leaders, tech executives, scientists and artificial intelligence specialists will meet at Bletchley Park in the UK to discuss how to reap the greatest benefits from this potent technology while minimising the risks that come with it.

    The UK’s forthcoming AI safety summit focuses on the most extreme risks, such as superintelligence and an “intelligence explosion”. This is what is commonly called “frontier AI” – technology that does not yet exist, although AI is evolving at great speed.

    Critics, however, argue that the meeting should concentrate on AI’s more immediate problems, such as its energy consumption and its effect on jobs.

    A report published by the UK government last week highlighted some disturbing potential dangers, including bioterrorism, cyber-attacks, AI systems acting beyond human control, and the wider sharing of child sexual abuse deepfakes.

    Despite that frightening backdrop, Prime Minister Rishi Sunak has urged people not to “lose any sleep”. His objective is ambitious: he hopes to make the UK the world leader in AI safety.

    However, when the summit was announced and billed as the world’s first, many eyebrows were raised. It seemed highly unlikely that the world’s top leaders would venture to a distant, leafy region of Britain in the depths of winter, close to America’s Thanksgiving holiday.

    There is no official guest list, but it is quite evident by now that the US tech giants will be strongly represented – by high-level executives, though not necessarily at CEO level.

    The commercial sector is full of enthusiasm for the summit. Stability AI CEO Emad Mostaque called it a once-in-a-generation chance for the UK to seal its status as an AI superpower.

    “Our position is that we should urge the government and other policymakers to support AI safety across the whole ecosystem, from corporate labs to independent researchers, to keep Britain safe and competitive,” he said.

    The tech firms should certainly play a crucial role in the discussion, since they lead the production of AI systems. But there is a risk the summit simply repeats conversations they are already having among themselves. Diversity of thought is critical here.

    The world-leader contingent is something of a hodge-podge. US Vice-President Kamala Harris will be there, but Canada’s Prime Minister Justin Trudeau will not. European Commission President Ursula von der Leyen will attend, but German Chancellor Olaf Scholz will not. Controversially, China – whose relations with Western nations are strained, but which remains a formidable tech power – has been invited to take part.

    Interestingly, UN Secretary-General Antonio Guterres is also travelling to the summit – a signal of the appetite for a global body to have some regulatory role over AI technologies.

    Some experts, however, worry that the summit is tackling issues in the wrong order. They contend that the probability of extreme doomsday scenarios is relatively low, while more pressing challenges closer to home are far more unsettling for many individuals.

    “Our concerns are about what will happen to our jobs, our news, our means of communication, our people, our communities and ultimately our planet,” says Prof Gina Neff, who heads an AI centre at the University of Cambridge.

    Within four years, the computing infrastructure needed to power the AI industry could consume as much energy as the Netherlands does. The summit will not discuss this.

    We also know that AI has already begun disrupting jobs. I had a friend working at a local marketing agency that once employed five copywriters; now there is just one, who edits what ChatGPT produces. And in the Department for Work and Pensions, AI is helping to process benefits claims.

    But where do these tools get their training data? The commercial companies that own them keep those details secret. Yet arguably an AI is only as good as the data it learns from, and bias and discrimination have crept into many of the tools people already use every day, including off-the-shelf ones.

    These problems will come up at the summit, but they will not be its core focus.

  • The chip maker that became an AI superpower…

    Shares in computer chip designer Nvidia have soared over the past week, taking the company’s valuation above the one trillion dollar mark.

    Nvidia was known for its graphics processing computer chips.

    That makes Nvidia only the fifth US company to reach a trillion-dollar valuation, joining Apple, Amazon, Alphabet, and Microsoft in a very exclusive club.

    The surge followed the announcement of its second-quarter earnings, released late on Wednesday, in which the company said it was raising chip production in response to “surging demand”.

    Today, Nvidia dominates the market for AI chips.

    Interest in that sector turned into a frenzy after ChatGPT was made publicly available last November, spreading far beyond the field of technology.

    From helping with speeches to writing software code and suggesting recipes, ChatGPT is by far one of the most visible applications of AI.

    However, none of it would have been possible without some very powerful computer hardware – in particular, chips from California-based Nvidia.

    Most AI applications run on Nvidia hardware, even though the company was originally known as a maker of graphics processing units for computer games.

    “This is the leading technology player for the new thing called artificial intelligence,” says Alan Priestley, a semiconductor industry analyst at Gartner.

    “Nvidia is to AI as Intel was to PCs,” says Dan Hutcheson, an analyst at TechInsights.

    ChatGPT was trained using 10,000 of Nvidia’s GPUs clustered together into a supercomputer belonging to Microsoft.

    “It is one of many supercomputers – some public and some not – that have been built with Nvidia GPUs for a range of scientific as well as AI purposes,” says Ian Buck, general manager and vice-president of accelerated computing at Nvidia.

    Recent research by CB Insights found that Nvidia holds about 95% of the GPU market for machine learning.

    Its AI chips cost around $10,000 (£8,000) each, though its newest and most powerful models sell for far more.

    So how did Nvidia come to be such a central player in the AI revolution?

    In brief: a courageous bet on its own technology, plus a pinch of lucky timing.

    Jensen Huang, now Nvidia’s chief executive, was one of its founders back in 1993. At the time, the company’s focus was on improving graphics for gaming and other applications.

    It invented the GPU in 1999 to enhance the display of images on computers.

    GPUs are excellent at processing lots of small jobs at the same time – in this case, millions of pixels – a technique known as parallel processing.
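
    To make that idea concrete, here is a minimal illustrative sketch in Python (the frame size and the brightness operation are invented for this example). The data-parallel version applies one operation to every pixel at once – exactly the pattern GPUs are built to accelerate:

    ```python
    import numpy as np

    # A small mock RGB frame; a real 1080p frame has ~2 million pixels (6M values).
    frame = np.random.randint(0, 256, size=(108, 192, 3), dtype=np.uint8)

    # Sequential approach: visit every pixel one at a time (slow in pure Python).
    def brighten_sequential(img, amount=30):
        out = img.astype(np.int16)  # widen so the addition cannot overflow
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.clip(out[y, x] + amount, 0, 255)
        return out.astype(np.uint8)

    # Data-parallel approach: one operation applied to every pixel at once.
    # Repeating the same small job across millions of independent data points
    # is exactly what a GPU's thousands of cores do simultaneously.
    def brighten_parallel(img, amount=30):
        return np.clip(img.astype(np.int16) + amount, 0, 255).astype(np.uint8)

    assert np.array_equal(brighten_sequential(frame), brighten_parallel(frame))
    ```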

    In 2006, researchers at Stanford University found that GPUs had a use beyond gaming: they could also accelerate mathematical operations in a way that ordinary processing chips could not.

    It was at that point that Mr Huang made a move critical to the development of AI as we know it.

    He poured his company’s resources into designing a programming tool that unleashed the potential of GPUs not only for graphics, but for any application that could take advantage of parallel processing.
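
    That tool was CUDA, Nvidia’s platform for general-purpose GPU programming. As a rough sketch of what this looks like from Python today, the snippet below uses the CuPy library – an assumption of this example; it requires a CUDA-capable Nvidia GPU and `pip install cupy` – to run a large matrix multiplication, the core operation of neural-network training, on the GPU:

    ```python
    import cupy as cp  # numpy-style arrays that live on an Nvidia GPU via CUDA

    # Two large random matrices, allocated directly in GPU memory.
    a = cp.random.rand(4096, 4096, dtype=cp.float32)
    b = cp.random.rand(4096, 4096, dtype=cp.float32)

    # One line of array math fans out across thousands of GPU threads -
    # the same parallel hardware that once only pushed pixels.
    c = a @ b

    cp.cuda.Stream.null.synchronize()  # wait for the GPU to finish its work
    print(float(c.sum()))              # copy a summary of the result back to the CPU
    ```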

    That computing power was added to Nvidia’s chips whether or not gamers required it. Players may not have needed the capability, and may not even have known they had it, but researchers suddenly had a new form of high-performance computing in consumer hardware.

    Those capabilities were responsible for triggering the initial successes of contemporary AI.

    Alexnet, an AI that can classify images, was unveiled in 2012. Only two of Nvidia’s programmable GPUs were used to train it.

    The training process took only a few days, instead of the months it might have taken on hundreds of ordinary processing chips.

    Word quickly spread among computer scientists, who began buying these cards to run a brand-new class of workload.

    “We were caught by AI,” says Mr Buck.

    Nvidia pressed home its advantage by pouring investment into a range of GPUs specialised for AI, and into related software that simplifies the technology’s use.

    Ten years and billions of dollars later, ChatGPT was born – an AI that sounds remarkably human in its responses.

    Metaphysic, an American AI start-up, produces photorealistic videos of celebrities and others; the Tom Cruise deepfakes it made in 2021 had everyone talking.

    It trains and runs its models on hundreds of Nvidia GPUs, some bought from Nvidia and others accessed through cloud computing services.

    “Nvidia is the only option out there that we could use,” says Tom Graham, its co-founder and CEO. “It’s way out in front.”

    Nonetheless, while Nvidia seems safe in the short term at least, the longer term is harder to predict. “Nvidia has a bull’s eye on it that every competitor is aiming to shoot,” says Kevin Krewell, another industry analyst, at TIRIAS Research.

    Some competition comes from other major semiconductor companies. AMD and Intel are both better known as manufacturers of CPUs, but AMD has been making dedicated GPUs for AI (Intel entered this market only recently).

    Google’s TPUs are employed both for search results and for certain machine learning tasks, while Amazon has its own chip for training AI models.

    In addition, there are rumours of a Microsoft AI chip, and Meta has its own AI chip development project.

  • Censys lands new cash to grow its threat-detecting cybersecurity service….

    Investment in cybersecurity companies appears to be approaching a turning point.

    Despite a tough summer, total venture capital investment in security startups grew from approximately $1.7 billion to almost $1.9 billion in Q3, according to Crunchbase – a modest increase of about 12%. That is still down 30% year on year, but it is a tentative sign that investors’ hunger for cyber is rising again.

    The reasons may include growing enterprise spending on cybersecurity, among other factors.

    As noted in an IANS Research brief, corporate IT security spending increased by about six percent from 2022 to 2023 – a marginal but meaningful rise. Most respondents attributed the increase to “increased risk” and “digital transformation”, that is, the digitisation of legacy apps, products and services.

    That is good news for a startup called Censys, which sells customers insights into their vital internet-connected infrastructure. Censys today revealed that it has secured $50 million in Series C financing and another $25 million in debt, bringing its cumulative total raised to $178.1 million.

    Censys traces its roots to ZMap, a network scanning tool created in 2013 by the organisation’s chief scientist, Zakir Durumeric. In 2015, realising the program could be used to improve, rather than merely observe, the insecure state of the internet, Durumeric assembled a team to revamp and extend ZMap’s capabilities.

    Censys CEO Brad Brooks told TechCrunch in an email interview that the company’s belief remains unchanged: complete visibility into everything connected to and exposed on the internet is the only way to protect vulnerable people and organisations against external threats.

    Today, Censys offers a variety of mechanisms for tracking hosts and their security postures. Customers get a database of vulnerabilities “enhanced by context”, as well as an environment for monitoring and assessing their internet-exposed assets, such as servers and office computers.

    According to Brooks, most users treat the platform as a threat-hunting tool, searching for and investigating potential security compromises while drawing on Censys’s asset-monitoring technology. Others use Censys to assess how effective their security measures have been, in terms of the duration and intensity of their risk exposure.

    “Censys utilizes leading internet intelligence to provide leaders and their teams with the data they need to stay ahead of sophisticated cyber-attacks,” Brooks said. Customers can also use Censys’s benchmarks to assess their company’s performance over time and to establish industry standards.

    More recently, Censys joined the generative AI bandwagon, building a chatbot that lets people search its security datasets in natural language. They might, for example, ask the platform to “show me all Australian hosts running both FTP and HTTP”. They can also use the chatbot to translate search syntax from third-party threat-hunting platforms such as Shodan or ZoomEye into query language that Censys understands.
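
    To illustrate what such a query looks like under the hood, here is a hedged sketch using Censys’s official Python library (`pip install censys`). The field names follow Censys’s documented search syntax but are an assumption of this sketch and should be verified against the current docs; the client expects API credentials in the CENSYS_API_ID and CENSYS_API_SECRET environment variables:

    ```python
    from censys.search import CensysHosts

    # The client reads CENSYS_API_ID / CENSYS_API_SECRET from the environment.
    hosts = CensysHosts()

    # Roughly the query the chatbot would produce for "show me all Australian
    # hosts running both FTP and HTTP" - field names per Censys's search schema
    # (an assumption of this sketch; verify against the current documentation).
    query = (
        'location.country: "Australia" '
        'and services.service_name: FTP '
        'and services.service_name: HTTP'
    )

    # Fetch a single page of up to 25 matching hosts and list their services.
    for page in hosts.search(query, per_page=25, pages=1):
        for host in page:
            names = [s.get("service_name") for s in host.get("services", [])]
            print(host["ip"], names)
    ```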

    Censys claims 350,000 users of its free service and more than 180 paying customers, including the US federal government and the Department of Homeland Security. The Ann Arbor, Michigan-based firm intends to put the new capital toward growth, and expects to employ around 150 people by the end of 2023.

    According to Brooks, demand for cybersecurity remains high despite the economic uncertainty. Indeed, there have been tailwinds: major enterprises moving to cloud-based solutions and shifting thousands of employees to remote work have driven up demand and inquiries.

    Even so, it will not be easy for Censys to sustain its momentum through fierce headwinds.

    The remaining months of 2023 threaten stagnant or even declining venture funding, owing to turmoil in Europe and tensions between the US and China. Higher interest rates are reducing the appetite for new investment, which in turn affects startups’ ability to raise money.

    Hedge funds and late-stage and crossover investors – those who back both publicly traded and privately held companies – have retreated from the market, and from the technology sector in general, in anticipation of a recession.

    On a more positive note, however, there are grounds for hope.

    As noted in a recent Spiceworks report, although most businesses fear a recession in 2024, about two-thirds (66%) of them still plan to increase their annual budgets, including allocations for cybersecurity, in 2024. And according to IDC, global cybersecurity spending will exceed $219 billion this year, rising to nearly $300 billion by 2026.