Author: Faraz

  • How generative AI is set to make cybersecurity stronger in a world where trust is hard to come by.

    CISOs and CIOs continue to weigh whether generative AI is best treated as a real-time learning engine, one that continuously records behavioral, telemetry, intrusion, and breach data, or whether the risks it introduces outweigh that value. The goal is to build a muscle memory of threat intelligence that helps predict and prevent breaches and streamlines SecOps work.

    Trust in gen AI, however, is mixed. In interviews VentureBeat conducted earlier this year with CISOs across manufacturing and services companies, most saw real productivity upside for marketing, operations, and especially security teams, yet board members frequently raised concerns about leaks of intellectual property and sensitive data. Deep Instinct’s recent survey, Generative AI and Cybersecurity: Bright Future or Business Battleground?, quantifies the trends VentureBeat hears in those CISO conversations. The study found that 69% of organizations have adopted generative AI tools, yet roughly half of security professionals believe the technology leaves their companies more exposed to attack, and 88% of CISOs and security leaders consider weaponized AI attacks inevitable.

    Eighty-five percent say gen AI has likely fueled recent attacks, pointing to the resurgence of WormGPT, a generative AI tool marketed to attackers running phishing and business email compromise (BEC) campaigns. Weaponized gen AI tools sold on the dark web and via Telegram are selling briskly: FraudGPT, for example, had attracted more than 3,000 subscriptions by July.

    According to Sven Krasser, the chief scientist and senior vice president at CrowdStrike, cyber attackers are quickly adapting to use large language models (LLMs) and generative AI for malicious purposes. Krasser highlighted that cybercriminals are incorporating LLM technology into activities like phishing and malware. He pointed out, though, that while this increases the speed and volume of attacks, it doesn’t significantly change the quality of the attacks.

    Krasser added that cloud-based security, utilizing AI to analyze signals globally, is an effective defense against these new threats. He noted that generative AI doesn’t necessarily raise the bar for malicious techniques, but it does make it easier for less skilled adversaries to be more successful.

    Max Heinemeyer, the director of threat hunting at Darktrace, stressed the importance of businesses adopting cyber AI for defense before offensive AI becomes widespread. He emphasized that in a scenario where algorithms are pitted against each other, only autonomous responses can effectively counter AI-augmented attacks at machine speeds.

    A distinctive strength of gen AI is its capacity for continuous learning, particularly when it comes to making sense of the huge volumes of data that endpoints and sensors generate. Continuously updated threat assessment and risk prioritization algorithms are expected to drive compelling new use cases that CISOs and CIOs anticipate will sharpen early threat prediction and strengthen their security postures.

    Ivanti’s new partnership with Securin is a case in point. The two companies are combining Securin Vulnerability Intelligence (VI) with Ivanti Neurons for Vulnerability Knowledge Base to give customers’ security teams near-real-time vulnerability intelligence and a faster assessment and prioritization process. Dr. Srinivasa Mukkamala, Ivanti’s chief product officer, said the partnership lets Ivanti offer clients thorough intelligence across all vulnerability sources and a risk prioritization approach that augments human analysts with AI. Generative AI-based cybersecurity platforms, systems, and solutions are forecast to grow at a 22% compound annual growth rate, from $1.6 billion in 2022 to $11.2 billion by 2032, and Canalys projects that gen AI will support more than 70% of businesses’ cybersecurity operations within five years.

    Forrester groups generative AI use cases into three categories: content creation, behavior prediction, and knowledge representation. AI and ML themselves are nothing new in security; most security tools built over the last decade rely heavily on ML. Adaptive and contextual authentication, for example, uses ML to build risk scores from heuristic rules, naïve Bayes classification, and logistic regression, as Forrester principal analyst Allie Mellen notes.
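    As a purely illustrative sketch of the kind of ML-based risk scoring Mellen describes, the snippet below fits a logistic regression model to hypothetical login-telemetry features and turns its output into an authentication risk score. The feature names, toy data, and the 0.7 step-up threshold are assumptions for illustration, not any vendor’s actual implementation.

    ```python
    # Illustrative sketch: logistic-regression risk scoring for adaptive
    # authentication. Features, data, and the step-up threshold are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Each row: [failed_logins_last_hour, new_device, geo_velocity_kmh, off_hours]
    X = np.array([
        [0, 0,   5, 0],
        [1, 0,  20, 0],
        [0, 1, 300, 1],
        [4, 1, 900, 1],
        [0, 0,  10, 1],
        [5, 1, 700, 0],
    ])
    y = np.array([0, 0, 1, 1, 0, 1])  # 1 = session later confirmed malicious

    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X, y)

    def risk_score(features):
        """Return the modeled probability (0.0-1.0) that a login attempt is risky."""
        return float(model.predict_proba(np.array([features]))[0, 1])

    login = [2, 1, 450, 1]  # a suspicious-looking attempt
    score = risk_score(login)
    action = "step-up MFA" if score > 0.7 else "allow"
    print(f"risk={score:.2f} -> {action}")
    ```

    In production the same idea would be trained on far richer telemetry and combined with heuristic rules, but the score-and-threshold pattern is the core of adaptive authentication.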

    How security leaders, including CISOs and CIOs, guide their boards in navigating the risks and benefits of generative AI will play a crucial role in shaping the technology’s future. Gartner predicts that by 2026, 80% of applications will incorporate generative AI capabilities, setting a trend already observed in most organizations.

    CISOs who find the most value in the initial generation of generative AI applications emphasize the importance of adaptability to their teams’ working methods. This adaptability extends to how generative AI technologies can complement and reinforce the broader zero-trust security frameworks they are actively developing.

    For CISOs venturing into the realm of generative AI, their focus is on taking a zero-trust approach to every interaction involving these tools, apps, platforms, and endpoints. This approach necessitates continuous monitoring, dynamic access controls, and constant verification of users, devices, and the data they use both at rest and in transit.

    The primary concern for CISOs is that generative AI could introduce new attack vectors they are not yet prepared to defend against. For enterprises building large language models (LLMs) in particular, safeguarding against query attacks, prompt injection, model manipulation, and data poisoning becomes a top priority. To fortify infrastructure against these emerging attack surfaces, CISOs and their teams are intensifying their commitment to the principles of zero trust. Source: Key Impacts of Generative AI on CISOs, Gartner.

    Among security teams, gen AI is used mainly as a knowledge management tool and as an alternative to expensive, labor-intensive systems integration projects in large enterprises. Chatbot-based copilots dominated RSAC 2023: vendors launching ChatGPT-style assistants included Google (Security AI Workbench), Microsoft (Security Copilot, launched before the show), Recorded Future, SecurityScorecard, and SentinelOne.

    Given its knowledge of customers’ IT service management (ITSM), cybersecurity, and network security needs, Ivanti has become one of the leaders in this field. The company is hosting a webinar on transforming IT service management with generative AI, featuring Susan Fung, principal product manager for AI/ML at Ivanti. At its Fal.Con 2023 conference, CrowdStrike announced twelve new cybersecurity offerings. The Falcon platform leverages Charlotte AI for faster detection, deeper investigations, and quicker response through natural-language interaction, and Charlotte AI generates LLM-driven incident summaries that help analysts work through breaches more quickly.

    CrowdStrike will roll Charlotte AI out to all Falcon customers on its Raptor platform, with the first updates arriving between late September and late December 2023. Raj Rajamani, CrowdStrike’s chief product officer, told VentureBeat that Charlotte AI makes security analysts two to three times more productive by automating repetitive work, and that CrowdStrike invested heavily in the graph database architecture that powers Charlotte AI across endpoints, cloud workloads, and identities.

    Exploits targeting cloud systems surged by 95% year-over-year as attackers continuously refined their tactics to exploit weaknesses in cloud configurations. Defending against these attacks has become one of the fastest-growing challenges for businesses.

    Looking ahead to 2024, VentureBeat anticipates an increase in mergers, acquisitions, and joint ventures specifically aimed at addressing security gaps in multi-cloud and hybrid-cloud environments. The recent acquisition of Bionic by CrowdStrike marks the beginning of a broader trend, signaling a collective effort to empower organizations to enhance their application security and overall posture management. Notable previous acquisitions in the cloud security realm include Microsoft’s acquisition of CloudKnox Security, CyberArk’s acquisition of C3M, Snyk’s acquisition of Fugue, and Rubrik’s acquisition of Laminar.

    This strategic acquisition not only bolsters CrowdStrike’s capability to offer consolidated cloud-native security but also underscores a broader industry trend. Bionic aligns well with CrowdStrike’s customer base, which primarily consists of organizations prioritizing cloud technologies. This reflects a growing reliance on acquisitions to enhance the potential of generative AI in the field of cybersecurity.

  • Is the EU AI Act in trouble? The OpenAI drama gives us some hints.

    The EU AI Act, aiming to be a significant piece of AI legislation, is currently facing uncertainty. The debate centers around how to regulate ‘foundation’ models, massive AI models like GPT-4, Claude, and Llama.

    Recently, the French, German, and Italian governments proposed limited regulation for these foundation models. Many argue that this move is influenced by heavy lobbying from Big Tech and open-source companies like Mistral, advised by Cédric O, a former digital minister for the French government. Critics view this as a “power grab” that could weaken the EU AI Act.

    On the other side, supporters of regulating foundation models within the EU AI Act are pushing back. Just yesterday, the Future of Life Institute, known for the six-month ‘pause’ letter, released another open letter urging the German government not to exempt foundation models. They argue that such an exemption would harm public safety and European businesses. Signatories include prominent AI researchers like Geoffrey Hinton and Yoshua Bengio, along with AI critic Gary Marcus.

    French experts, including Bengio, echoed these concerns in a joint op-ed in Le Monde, opposing what they see as attempts by Big Tech to undermine this crucial legislation in its final phase.

    So, why is the EU AI Act hitting a roadblock now? Despite being in the final stages of negotiation, two and a half years after the draft rules were proposed, disagreements persist. The EU AI Act focuses on high-risk AI systems, transparency for AI interacting with humans, and AI systems embedded in regulated products. The current phase, known as the trilogue, involves EU lawmakers and member states negotiating the final details. The European Commission hoped to vote on the AI Act by the end of 2023, avoiding potential political impacts from the 2024 European Parliament elections.

    Surprisingly, the recent drama at OpenAI, in which CEO Sam Altman was ousted by the company’s nonprofit board only to be reinstated five days later after two other board members were removed and replaced, sheds light on the same tensions playing out around the EU AI Act.

    According to reporting on the episode, Altman and president Greg Brockman, both members of the nonprofit board, favored pursuing commercial opportunities to fund the development of AGI. The three non-employee board members, Adam D’Angelo, Tasha McCauley, and Helen Toner, were reportedly more focused on AI safety, to the point of preferring to shut OpenAI down rather than release a technology that could pose serious risks. With chief scientist Ilya Sutskever siding with them, the board fired Altman.

    All three non-employee members have ties to the Effective Altruism movement, which loops back to the lobbying around the EU AI Act. The Future of Life Institute, led by president Max Tegmark, focuses on AI existential risk and is also connected to effective altruism. Last week, the Wall Street Journal reported that the Effective Altruism community has spent large sums promoting the idea that AI poses an existential threat.

    But Big Tech, including OpenAI, has been actively involved in lobbying as well. It’s important to remember that Sam Altman, the returning CEO of OpenAI, had sent mixed signals about AI regulation for months, especially in the EU. Back in June, a TIME investigation uncovered that while Altman was publicly advocating for global AI regulation, behind the scenes, OpenAI had lobbied to weaken “significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act.” Documents obtained by TIME from the European Commission through freedom of information requests revealed this effort to reduce regulatory constraints on the company.

    Gary Marcus recently pointed to the OpenAI situation as a reason for EU regulators to ensure that Big Tech doesn’t get the chance to regulate itself. Supporting the tiered approach backed by the European Parliament for handling risks linked to foundation models in the EU AI Act, Marcus emphasized on X: “The chaos at OpenAI only serves to highlight the obvious: we can’t trust big tech to self-regulate. Reducing critical parts of the EU AI Act to an exercise in self-regulation would have devastating consequences for the world.”

    Brando Benifei, one of the European Parliament lawmakers leading negotiations on the laws, highlighted the recent drama around Altman being ousted from OpenAI and subsequently joining Microsoft. He told Reuters last week, “The understandable drama around Altman being sacked from OpenAI and now joining Microsoft shows us that we cannot rely on voluntary agreements brokered by visionary leaders.”

    As of now, it remains unclear whether the EU AI Act is genuinely at risk of falling apart. German consultant Benedikt Kohn, who published an analysis yesterday, noted that while additional negotiation rounds continue, a deal must be reached expeditiously: the next trilogue is scheduled for December 6, before Spain hands over the Council presidency at the end of the year.

    He added that a failure of the EU AI Act would be a bitter blow for everyone involved, since the EU has long positioned itself as a front-runner in regulating artificial intelligence.

  • Breaking News: There are reports that Google Drive may have lost some user data.

    There seems to be a problem with Google Drive, the widely used cloud storage service. Some users are reporting that their files are mysteriously disappearing. Numerous complaints have surfaced on the Google Support forum and other platforms, suggesting that users are losing access to specific files stored in their Google Drive accounts.

    According to information from Android Police, this issue seems to be affecting files uploaded after May 2023. A user from South Korea noticed that all files uploaded during that period had vanished from their Drive. What’s particularly concerning is that the affected files are nowhere to be found in folders, the trash, or even in the revision history. The user also made it clear that they haven’t shared their Drive with anyone else, ruling out the possibility of accidental deletion or someone un-sharing the files.

    More reports from different Google Drive users have surfaced, confirming the ongoing issue. One user shared the distressing experience of losing important files, while another mentioned being able to see folders and subfolders for some files but not the files themselves.

    The exact cause of this problem is still unclear, and it remains uncertain whether it impacts the web version, the app, or synced folders on computers. Users are advised to thoroughly check all possible locations where their files might be accessible.

    While Google has not officially acknowledged the problem, some users have reported receiving individual responses from Google Support on the Google Drive help forum. These users have been instructed not to make any changes to their Google Drive while Product Engineers conduct the investigation. A specific caution has been issued against modifying the root/data folder on their devices.

    As of now, the Google Workspace Status Dashboard indicates that the Google Drive service is fully operational with no reported issues. However, if you encounter problems with missing files, it’s recommended to either reach out to Google Support or patiently await updates as the ongoing cases are looked into. Additionally, exercise caution when using Google Drive and refrain from making any changes that could potentially jeopardize your files. Hopefully, Google will provide an update and/or a resolution soon.

  • Evaluating the Impact of Environmental Regulations and Spatial Effects on Energy-Environmental Efficiency: A Study of Green Technology Innovation in China

    Summary

    Improving energy-environmental efficiency (EEE) is vital for meeting energy conservation and emission reduction objectives. This research investigates the mechanisms by which green technology innovation (GTI) influences EEE and explores the role of environmental regulation (ER) in shaping that relationship. Employing a dynamic spatial Durbin model on panel data covering 30 Chinese provinces from 2003 to 2017, the study examines how GTI affects EEE in the context of ER. The empirical findings are as follows: (1) GTI has a U-shaped impact on EEE, driven mainly by substantive green innovation (SubGI). (2) GTI’s influence on EEE shows up primarily in pure technical efficiency (PTE), again stemming from SubGI. (3) The coefficient on the interaction term between ER and GTI is 0.0022, with the GTI coefficient at -0.0741 and the GTI quadratic term at 0.0007, all statistically significant, suggesting that ER mitigates the negative impact of GTI on EEE while reinforcing its positive effect. These results offer empirical evidence and policy insights for using GTI and ER to enhance EEE and achieve energy conservation and emissions reduction objectives.
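    Taking the reported quadratic specification at face value, the turning point of the U-shaped relationship can be read off the coefficients. This is a back-of-the-envelope calculation; the scaling of the GTI variable is not stated here, so the absolute value is only indicative.

    ```latex
    % Turning point of a quadratic effect \beta_1 \cdot GTI + \beta_2 \cdot GTI^2
    % with \beta_1 = -0.0741 and \beta_2 = 0.0007 (GTI in the study's own units)
    \mathrm{GTI}^{*} = -\frac{\beta_1}{2\beta_2} = \frac{0.0741}{2 \times 0.0007} \approx 52.9
    ```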

    Opening Statement

    Since the initiation of economic reforms and opening-up policies, China has experienced rapid economic development, giving rise to a growing contradiction among economic growth, energy consumption, and environmental pollution. According to the BP Statistical Review of World Energy in 2019, China’s primary energy consumption surged to 141.7 EJ, marking a 4.4% increase from the previous year and representing the fastest-growing energy consumer for 19 consecutive years. This escalating primary energy consumption not only signals a faster depletion of limited natural energy resources but also raises concerns about energy security.

    As a crucial indicator of a nation or region’s sustainable economic development, China’s carbon intensity in 2019 was 48.1% lower than in 2005. However, it still exceeded the global average during the period from 1990 to 2019. This underscores the imperative of controlling carbon emission intensity within the context of energy conservation and emissions reduction. Consequently, exploring ways to enhance energy-environmental efficiency (EEE) becomes a critical measure for reducing energy consumption and improving carbon intensity, serving as a key driver for achieving sustainable economic development in China.

    Green technology innovation (GTI) has emerged as a significant approach to ecological and environmental protection in various regions. It has demonstrated promising outcomes in controlling industrial energy consumption, reducing energy intensity, encouraging the development of green technologies, curbing the use of non-renewable energy sources, and harmonizing ecological conservation with economic growth. However, studies, such as that by Wang and Chen, have revealed that the relationship between resource dependency and haze pollution in Chinese prefecture-level cities is intricate. While GTI can reduce haze pollution when resource dependency is low or moderate, it may lead to the disappearance of optimization effects when resource dependency is high. Similarly, research by Mongo et al. indicates that environmental innovation may reduce carbon emissions in the long term but could have opposite effects in the short term. This suggests potential rebound effects associated with GTI, leading to increased resource consumption and carbon emissions. Consequently, it is crucial to assess the impact of GTI on EEE, considering the potential complexities of its effects.

    Moreover, studies by Zhang et al. and Xing and Dong highlight the importance of distinguishing between genuine green technology innovation and strategic green technology innovation, with the latter being influenced more by environmental regulations. This distinction reveals that the impact of GTI on EEE may vary depending on the nature of innovation, but current assessments often focus solely on GTI itself, neglecting the influence of its components on EEE.

    Furthermore, China has implemented a series of environmental policies since 1970 aimed at improving the environment through legal and market mechanisms, such as pollution fees and carbon trading markets. The interaction between environmental regulation (ER) and value-added tax has motivated enterprises to reduce the intensity of SO2 emissions and promote innovation in pollution reduction. Thus, when researching how to enhance EEE, considering ER is essential. However, existing research predominantly focuses on the impact of ER on EEE, overlooking the role of ER in the mechanism through which GTI affects EEE.

    To address these gaps, this study integrates the definition of GTI, decomposes GTI into Substantial green innovation (SubGI) and Symbolic green innovation (SymGI), and elucidates the underlying mechanisms through which GTI affects EEE. It aims to clarify whether the impact of GTI on EEE is driven by the development and promotion of new clean energy technologies or by improvements in ecological environmental protection and resource recycling technologies. Additionally, the study introduces the interaction effects of ER with GTI, SubGI, and SymGI, elucidating the role of ER in the mechanisms through which these components influence EEE. Lastly, spatial factors are incorporated through the use of spatial econometric models to explore the impact of GTI on EEE under the influence of ER, aiming for more realistic and reliable outcomes.

    This research contributes to the existing literature by providing a deeper understanding of the mechanisms of GTI through its decomposition. Simultaneously considering the moderating role of ER and spatial spillover effects aligns the study with the real-world context, offering a theoretical basis and reference for China and other developing countries to more accurately utilize GTI and ER for energy conservation and emissions reduction.

    Review of Existing Studies

    The trajectory of energy is molded by technology, and the innovation of technology paves the way for the future of energy. Within the confines of China’s dual carbon targets, the imperative of attaining energy conservation, emission reduction, and enhancing energy-environmental efficiency (EEE) has taken precedence. In the scrutiny of energy and the environment, EEE stands out as the most promising instrument for forging a balanced relationship between economic growth and resource consumption. Consequently, the mechanisms propelling EEE have become a focal point of extensive inquiry. The structure of the literature review is detailed in Table 1.

    Table 1. Structure of the literature review.

    Indicators | Dimension | Summary | References
    GTI | Positive impact | Enhanced environmental performance, profitability, core competitiveness, total factor carbon productivity in high-income economies, and carbon performance | 20, 21, 22, 23, 24
    GTI | Non-linear variation | Critical point reached before a boost in green productivity, with an inverted U-shaped effect on regional carbon emissions | 28, 29, 30
    ER | Positive effect | Promotes green transformation of enterprises, fosters long-term economic growth, increases green innovation outputs, improves energy efficiency, and enhances green technological efficiency | 11, 14, 33, 34, 35, 36
    ER | Moderating effect | Enhances the impact of green knowledge innovation on CO2 emissions | 11, 26
    EEE | Positive effect | Reduces pollution emissions, enhances sustainability of resource utilization, stimulates technological innovation, effectively counters environmental problems, and promotes sustainable development | 38, 39, 40, 41
    EEE | Affected by multiple factors | Influenced by technological innovation, policies and regulations, FDI absorptive capacity, level of economic development, industrial and energy structure, etc. | 42, 43, 44, 45, 46
    EEE | Consists of SE and PTE | EEE is the product of SE and PTE | 47, 48, 49

    GTI, positioned as a pivotal driver of sustainable development, has become a focal point for scholars across diverse fields. Initially, green innovation contributes not only to a company’s environmental performance, profitability, and core competitiveness but also bolsters total factor carbon productivity in high-income economies and carbon performance. This is attributed to green innovation steering companies toward more environmentally friendly production and operational methods, thus curbing resource waste and pollution emissions. Despite these varied benefits, the specific impact of GTI may differ depending on the industry or context. For instance, environmental regulations (ER) can amplify the effect of green knowledge innovation on CO2 reduction, prompting heavily-polluting enterprises to prioritize substantial green technology innovation during their green transformation. However, the fundamental reasons behind these impacts of GTI remain underexplored. Moreover, some studies neglect intrinsic driving factors of GTI.

    On another note, ER, functioning as a political tool, drives corporate green transformation, promotes long-term economic growth, enhances green innovation output, improves energy-environmental efficiency (EEE), and augments green technological efficiency. The rationale lies in moderate ER partially or fully offsetting enterprise innovation costs through the compensating effects of green innovation. However, ER’s implementation can also influence other factors, such as significantly increasing the impact of green knowledge innovation on CO2 emissions while having a less pronounced impact on green process innovation. Therefore, ER’s role is crucial in understanding the overall impact mechanism. Furthermore, environmental policies and carbon emissions exhibit spatial spillover effects, making the inclusion of spatial factors more realistic.

    EEE, a pivotal concept in the realms of the green economy and sustainable development, holds the potential to reduce pollution emissions, enhance sustainable resource utilization, stimulate technological innovation, and effectively address environmental issues, thereby promoting sustainable development. However, EEE is intricately influenced by various factors such as technological innovation, policy regulations, absorptive capacity of foreign direct investment, economic development level, industrial structure, and energy structure. It is a complex indicator requiring in-depth research and comprehensive consideration. Moreover, EEE is comprised of scale efficiency (SE) and pure technical efficiency (PTE), representing the product of these two efficiencies. SE is influenced by scale factors, while PTE is affected by management and technological factors. Hence, understanding which part of EEE is affected by various influencing factors is essential.
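    In formula terms, the decomposition described above is simply:

    ```latex
    \mathrm{EEE} = \mathrm{SE} \times \mathrm{PTE}
    ```

    so any change in measured EEE can be traced to its scale component, its pure technical component, or both.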

    In summary, despite extensive research on EEE, GTI, and ER in the field of energy-environment management, a research gap persists regarding the underlying mechanisms of GTI and the moderating role of ER. Given China’s energy structure, there is a critical need to enhance energy utilization efficiency through GTI, a cornerstone in achieving dual carbon objectives. To address the limitations in evaluating GTI’s impact on EEE, this study decomposes GTI into SubGI and SymGI, introduces interaction terms of ER with GTI, SubGI, and SymGI to explore the sources of GTI’s impact on EEE and the moderating effects of ER on this impact and its sources. Empirical analysis is conducted using spatial Durbin models, incorporating spatial spillover effects to better align with real-world development scenarios.

    Data Collection and Research Methodology

    Taking the year 2003 as the starting point due to the emphasis on development during the 16th National Congress of the Communist Party of China in November 2002, this study calculates the EEE of each province in China. The study investigates the impact of GTI on EEE under the influence of ER, with the calculation of EEE relying on the input indicator of capital stock measured by gross fixed capital formation. Unfortunately, this data is unavailable after 2017 in the China Statistical Yearbook. To ensure data availability, the study selects data from 30 provinces and regions in China (excluding Tibet, Hong Kong, Macao, and Taiwan) from 2003 to 2017.

    A two-step approach is employed to investigate the impact of GTI on EEE under the influence of ER. First, EEE is measured for the 30 provinces. Second, the impact of GTI on EEE under the influence of ER is analyzed, with EEE, SE, and PTE as the explained variables, GTI, SubGI, and SymGI as the explanatory variables, and ER as a moderating variable. Control variables include gross domestic product per capita, industrial structure, foreign direct investment, energy consumption structure, urbanization level, R&D investment intensity, energy intensity, fixed asset investment, and R&D personnel input.

    Flowchart of the method.

    Description of Data

    Green Technology Innovation (GTI): GTI encompasses the advancement and application of novel products and technologies dedicated to environmental protection, pollution reduction, energy and resource conservation, and the promotion of sustainable development. In this context, green patents include inventions, utility models, and design patents specifically related to resource conservation, energy efficiency, and pollution prevention. To align with the definition of GTI, considering design patents only cover product shapes and patterns, this study employs the total number of green invention patents and green utility model patents to measure GTI. Given the time-consuming nature of the patent granting process in China, utilizing the total number of patent applications serves as a prompt and accurate reflection of enterprises’ willingness and motivation to engage in GTI. Moreover, in line with the comprehensive essence of GTI, encompassing both the development and promotion of new clean energy technologies and improvements in ecological environment protection and resource recycling technologies, this research attempts to decompose GTI into Substantive Green Innovation (SubGI) and Symbolic Green Innovation (SymGI).

    Substantive Green Innovation (SubGI): Substantive Green Technology Innovation focuses on the development and adoption of environmentally beneficial technologies that result in substantial reductions in resource consumption and carbon emissions. Green invention patents, exhibiting creativity, novelty, and energy-efficient features, align with the fundamental principles of Substantive Green Innovation. Consequently, this study uses the quantity of green invention patent applications as a metric to assess SubGI.

    Symbolic Green Innovation (SymGI): Symbolic Green Innovation aims to respond to government environmental policies, emphasizing improvements to existing technologies but typically lacks significant positive environmental impacts. Green utility model patents, representing product or process improvements with energy-saving and emission reduction characteristics, align with the core principles of Symbolic Green Technology Innovation. Thus, this study employs the quantity of green utility model patent applications as a metric to assess SymGI.

    Environmental Regulation (ER): In adherence to the polluter-pays principle, China initiated the imposition of emission charges in 1982. By levying charges on enterprises and individuals emitting pollutants, this policy incentivizes the adoption of environmentally friendly measures, reduction of pollutant emissions, and consequent mitigation of environmental pollution and resource wastage. In this study, the emission charges to GDP ratio is used as a measure of ER.

    This research incorporates control variables, following the definitions provided by several scholars: gross domestic product per capita (PGDP), industrial structure (IS), foreign direct investment (FDI), energy consumption structure (ECStruc), urbanization level (Urban), research and development investment intensity (RDI), energy intensity (EI), fixed asset investment (Fix), and research and development personnel input (RDP). The definitions of each variable are detailed in Table 10.

    Distance Function in Non-Radial Directions

    Calculation of Energy-Environmental Efficiency (EEE)
    This section of the study focuses on determining the dependent variable, EEE. Leveraging the flexibility of Data Envelopment Analysis (DEA), which does not necessitate a specific functional form, proves effective in measuring the EEE of Decision Making Units (DMU). In this context, DMU pertains to the 30 provinces from 2003 to 2017.

    The Non-radial Directional Distance Function (NDDF) emerges as a variant of the DEA method, providing distinct advantages in the assessment of EEE. Firstly, it facilitates the simultaneous consideration of multiple input and output indicators, allowing for a more comprehensive evaluation of regional efficiency. Secondly, NDDF introduces directionality, specifying the optimization direction and yielding more precise assessment results. Moreover, it permits the assignment of varying weights to different input and output indicators during the distance function calculation, reflecting their significance in the evaluation process. In essence, NDDF proves to be more versatile in assessing EEE by comprehensively incorporating multiple indicators and their respective weights, thus contributing to a more accurate evaluation of the efficiency levels of businesses or regions.

    The NDDF function is defined based on the principle of output expansion while minimizing pollutant emissions, as follows.

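    The formula itself is not reproduced in this summary; a standard non-radial directional distance function of the kind used in this literature, written here as an assumed reconstruction from the surrounding definitions rather than the authors' exact specification, takes the form:

    ```latex
    \vec{D}(K, L, E, Y, C;\, \mathbf{g})
      = \max\left\{ \mathbf{w}^{\mathsf{T}} \boldsymbol{\beta} :
        \big( (K, L, E, Y, C) + \mathbf{g} \cdot \operatorname{diag}(\boldsymbol{\beta}) \big) \in T \right\}
    ```

    Here w is a normalized weight vector over inputs and outputs, g is the direction vector (contracting inputs and the undesired output while expanding the desired output), the scaling vector beta is non-negative, and T is the environmental production technology set.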
    In the formula, K, L, and E represent the input variables, Y stands for the desired output, and C denotes the undesired output. The specific input and output variables are detailed below:

    Input Indicators:

    1. Capital (K): The capital stock is estimated using the perpetual inventory method [34]; a sketch of the recursion follows this list.
    2. Labor (L): Labor is quantified based on the number of people employed at the conclusion of each year in each region.
    3. Energy (E): Energy consumption is gauged using the consumption of tons of standard coal in each region.
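    The perpetual inventory method referenced in item 1 is commonly written as the recursion below. This is the standard textbook form, stated as an assumption since the study's exact depreciation treatment and base-year choice are not given here.

    ```latex
    K_{i,t} = I_{i,t} + (1 - \delta) K_{i,t-1}
    ```

    where K_{i,t} is the capital stock of province i in year t, I_{i,t} is gross fixed capital formation deflated to constant prices, and \delta is the depreciation rate.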

    Expected Output:

    • Total Output Value of Each Province (Y): This is derived by converting the nominal GDP to 2003 constant price GDP through a price index.

    Unintended Output:

    • CO2 Emissions (C): CO2 emissions are computed based on the calorific value of consumption of nine energy sources: raw coal, coking coal, crude oil, gasoline, kerosene, diesel fuel, fuel oil, natural gas, and electricity.
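    With the inputs (K, L, E), the desired output (Y), and the undesired output (C) defined, the NDDF score for each province can be computed by solving a linear program. The sketch below is a minimal illustration in Python using scipy; the equal weights on energy, GDP, and CO2 and the direction vector that contracts E and C while expanding Y are common conventions assumed here, not necessarily the study's exact choices, and refinements such as variable returns to scale or window analysis are omitted.

    ```python
    # Minimal sketch of an NDDF efficiency evaluation via linear programming.
    # Weights, direction vector, and toy data are assumptions of this illustration.
    import numpy as np
    from scipy.optimize import linprog

    def nddf_inefficiency(K, L, E, Y, C, i):
        """Solve the NDDF linear program for decision-making unit (province) i.

        Returns the scaling factors (beta_E, beta_Y, beta_C); all zeros means the
        unit already sits on the best-practice frontier.
        """
        K, L, E, Y, C = map(np.asarray, (K, L, E, Y, C))
        n = len(K)
        K0, L0, E0, Y0, C0 = K[i], L[i], E[i], Y[i], C[i]

        # Decision vector: [lambda_1 .. lambda_n, beta_E, beta_Y, beta_C]
        c = np.concatenate([np.zeros(n), [-1 / 3, -1 / 3, -1 / 3]])  # maximize mean beta

        A_ub = np.vstack([
            np.concatenate([K, [0, 0, 0]]),    # sum(lam * K) <= K0
            np.concatenate([L, [0, 0, 0]]),    # sum(lam * L) <= L0
            np.concatenate([E, [E0, 0, 0]]),   # sum(lam * E) <= (1 - bE) * E0
            np.concatenate([-Y, [0, Y0, 0]]),  # sum(lam * Y) >= (1 + bY) * Y0
        ])
        b_ub = np.array([K0, L0, E0, -Y0])

        A_eq = np.concatenate([C, [0, 0, C0]]).reshape(1, -1)  # sum(lam * C) = (1 - bC) * C0
        b_eq = np.array([C0])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (n + 3), method="highs")
        return res.x[n:]

    # Toy data for three hypothetical provinces: capital, labor, energy, GDP, CO2
    K = [100.0, 120.0, 90.0]
    L = [50.0, 60.0, 40.0]
    E = [80.0, 70.0, 95.0]
    Y = [200.0, 260.0, 150.0]
    C = [60.0, 55.0, 90.0]

    beta_E, beta_Y, beta_C = nddf_inefficiency(K, L, E, Y, C, i=2)
    print(f"beta_E={beta_E:.3f}  beta_Y={beta_Y:.3f}  beta_C={beta_C:.3f}")
    ```

    In practice the same program is solved once per province and year, and the resulting scaling factors are aggregated into an EEE index.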

    Given the panel nature of the data, which has both time-series and cross-sectional dimensions, estimating with Ordinary Least Squares (OLS) would yield biased and inconsistent results [19]. Generalized Least Squares (GLS) can be used for panel data but ignores spatial effects, again leading to biased estimates. The spatial autoregressive model (SAR), spatial error model (SEM), and spatial Durbin model (SDM) all address spatial factors, but SAR overlooks cross-variable autocorrelation and SEM neglects spatial dependence among variables. This study therefore adopts the SDM as the baseline model and extends it with lag effects, yielding a dynamic spatial Durbin model (DSDM).
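    A dynamic spatial Durbin model of the kind described here is typically written as follows; this generic form is stated as an assumption, since the study's exact specification is not reproduced in this summary.

    ```latex
    \mathrm{EEE}_{it} = \tau\, \mathrm{EEE}_{i,t-1}
      + \rho \sum_{j} w_{ij}\, \mathrm{EEE}_{jt}
      + X_{it} \beta
      + \sum_{j} w_{ij} X_{jt} \theta
      + \mu_i + \nu_t + \varepsilon_{it}
    ```

    where w_{ij} are elements of the spatial weight matrix, X_{it} stacks GTI (and its square), ER, their interaction, and the control variables, \tau captures the time lag, \rho the spatial lag, and \mu_i and \nu_t are province and year effects.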

    Findings and Discourse

    Energy-Environmental Efficiency (EEE)

    Year | PTE | SE | ER
    2003 | 0.341*** | 0.107*** | 0.270***
    2004 | 0.376*** | 0.118*** | 0.288***
    2005 | 0.392*** | 0.121*** | 0.237***
    2006 | 0.422*** | 0.127*** | 0.225***
    2007 | 0.412*** | 0.120*** | 0.203***
    2008 | 0.408*** | 0.117*** | 0.198***
    2009 | 0.391*** | 0.109*** | 0.198***
    2010 | 0.383*** | 0.102*** | 0.197***
    2011 | 0.377*** | 0.097*** | 0.209***
    2012 | 0.398*** | 0.101*** | 0.219***
    2013 | 0.399*** | 0.099*** | 0.240***
    2014 | 0.400*** | 0.102*** | 0.240***
    2015 | 0.373*** | 0.097*** | 0.244***
    2016 | 0.395*** | 0.104*** | 0.247***
    2017 | 0.412*** | 0.110*** | 0.259***

    Environmental Regulation (ER)

    Year | GTI | SubGI | SymGI
    2003 | 0.263*** | 0.150* | 0.036**
    [Annual rows for 2004 through 2017 are not reproduced here.]

    Table 4: Effects of Green Technology Innovation (GTI) and Environmental Regulation (ER) on Energy-Environment Efficiency (EEE)

    [Table 4, not reproduced in full, reports six model specifications estimated under alternative spatial weight matrices, with coefficients and t-statistics for GTI, its quadratic term, ER, the ER × GTI interaction, and the controls, together with spatial rho, sigma²_e, R² values between 0.854 and 0.980, and 420 observations per model.]

    The results in Table 4 unveil the direct, spatial, and total impacts of GTI on EEE while accounting for the influence of ER. The coefficients for the total effect of GTI are notably negative under the weighting matrices W1 and W2 [65]. Moreover, the total effect mirrors the spatial effect, indicating that the restraining impact of GTI on neighboring regions outweighs its stimulating effect on the local region [66]. A plausible explanation for this phenomenon lies in the resource competition triggered by the promotion of GTI in neighboring areas. This study stands out by incorporating the quadratic term of GTI [26,47]. Notably, the coefficient for the quadratic term of GTI is significantly positive [30], in line with the trend of the direct effects, implying the existence of a critical point in the impact of GTI on EEE. Before this critical point is reached, the negative spatial spillover effect of GTI on EEE outweighs the positive direct effect, and EEE declines; beyond it, the positive direct effect surpasses the negative spatial spillover effect, ultimately enhancing EEE. The underlying reason for this shift is that as GTI advances, the prerequisites for its success become better defined, fostering cooperative relationships among neighboring regions and speeding the adoption of green technology across regions.

    Similarly, under matrices W1 and W2, the total effect coefficient of ER on EEE is significantly positive, indicating that enforcing ER contributes to EEE [19]. This result may be attributed to the use of the emission-fees-to-GDP ratio as the ER indicator in this study: enterprises with excessive emissions face higher emission costs and operating expenses, prompting them to update production equipment [35] and reduce pollutant emissions, which improves EEE. Furthermore, the coefficient for the interaction term between ER and GTI is significantly positive, even though the impact of GTI on EEE follows a U-shaped curve. This implies that before GTI reaches its critical point, the implementation of ER mitigates the negative impact of GTI on EEE. In the initial phases of GTI, businesses encounter high sunk costs, and environmental policies encourage businesses and individuals to invest in the research and application of green technology, increasing financial support for its development. Once GTI passes the critical point, the implementation of ER reinforces the stimulating effect of GTI on EEE, primarily because the conditions required for green technology innovation are by then mature and businesses have developed an awareness of it. At that stage, ER can stimulate long-term investment and collaboration, accelerating the development and application of green technology.

    Unveiling the Source of GTI’s Impact on EEE: A Deeper Exploration

    While our understanding of the influence of Green Technology Innovation (GTI) on Energy-Environment Efficiency (EEE), and of the role of Environmental Regulation (ER) in shaping that influence, has advanced, the specific origins of GTI’s effect on EEE remain unclear. GTI encompasses both the development of new energy technologies and the enhancement of environmental protection and resource recycling technologies. To unravel this, the study decomposed GTI into Substantive Green Innovation (SubGI) and Symbolic Green Innovation (SymGI), leading to a fresh empirical analysis detailed in Table 5. Under the weight matrix W1, the coefficient of SubGI exhibits a significantly negative total effect, matching the trend observed for the GTI coefficient in Table 4, while the coefficient of SymGI is not significant. This suggests that the effect of GTI on EEE predominantly emanates from SubGI [31], implying that GTI chiefly influences EEE through the development and expansion of new clean energy technologies [11]. A likely reason is that new clean energy technologies not only curtail resource wastage but also diminish reliance on traditional fossil fuels, further reducing greenhouse gas emissions [66]. Moreover, the role of ER in shaping SubGI’s influence on EEE is consistent with the results obtained before the decomposition of GTI.

  • Upcoming Xbox Series X|S and Xbox One Game Releases to Look Forward To…

    Every upcoming new game being released for Xbox Series X|S and Xbox One.

    As the Xbox Series X/S enters its third year, Microsoft’s gaming platform is gradually building up an impressive collection of games. Consoles usually hit their stride in the latter half of their lifespan when developers shift away from cross-gen projects to focus solely on creating games for the latest systems. While the Xbox Series X/S hasn’t reached that peak just yet, it’s moving in that direction.

    In 2022, we saw the release of several outstanding games, especially in the open-world genre. Titles like Elden Ring, Lego Star Wars: The Skywalker Saga, A Plague Tale: Requiem, and Tiny Tina’s Wonderlands all delivered excellent experiences, each offering something unique. The trend continued into the following year with anticipated projects such as Dead Space, Street Fighter 6, Hi-Fi Rush, Remnant 2, Starfield, Forza Motorsport, and Lies of P. There’s still a lot of gaming excitement to be had before 2023 concludes.

    Wondering what Xbox games are on the horizon? Curious about titles expected beyond 2023? Keep in mind that we’re focusing on the North American release dates for Xbox Series X/S and Xbox One games, including expansions. As of November 22, 2023, the release schedule for the rest of the year seems pretty well settled for significant projects, and the lineup for 2024 is already taking shape.

    Recent additions to the schedule include Evil Diary, Park Beyond, Chicken Run: Dawn of the Nugget, Railbreak, Taxi Life: A City Driving Simulator, Devil Inside Us: Roots of Evil, Pinball M, Disney Dreamlight Valley: A Rift in Time, Metro Quester, Rising Dusk, Welcome to ParadiZe, and House Flipper 2.

    November 2023 has kept the gaming excitement alive, with noteworthy releases almost weekly. My Time at Sandrock, RoboCop: Rogue City and Microsoft Flight Simulator’s Dune expansion hit the shelves in the early days of the month. Nickelodeon All-Star Brawl 2 also made its debut, offering a fresh take for fighting game enthusiasts.

    One of November’s standout releases is Like a Dragon Gaiden: The Man Who Erased His Name, a spin-off set after Yakuza 6. Kiryu returns as the protagonist, and the game is available directly on Game Pass.

    For mainstream gamers, Call of Duty: Modern Warfare 3 is the obvious highlight this month. The sequel continues Task Force 141’s journey with a campaign that has sparked some controversy, and the multiplayer introduces a new spin on Zombies. In the world of zombies, The Walking Dead: Destinies is also available, though it hasn’t received the warmest reception. Additionally, Persona 5 Tactica debuted on Game Pass, adding to the diverse lineup of gaming experiences available to players.

  • How Autonomous Cars Benefit from Space Technology: Unveiling the Mechanics

    Cars empower individuals with the freedom to travel at their convenience. The emerging era of electric mobility promises to maintain this independence while offering ecological advantages. Yet, this transition relies significantly on space technology, particularly in the realm of software connectivity.

    “Space technology and the automotive sector have shared a close relationship for many years. Progress in space technology has played a pivotal role in driving enhancements in our terrestrial vehicles,” noted Ulrich Hermann, General Partner and Board Member at Einstein Industries Ventures, an affiliate of the European Space Agency (ESA), in a conversation with SAPVoice featured on Forbes.

    Cockpit of futuristic autonomous car.
    GETTY

    “Space presents formidable challenges and requires durable, lightweight materials, such as those used in airbags originally designed for space travel and now adapted for automotive use. The exploration of space has also given rise to groundbreaking technologies such as the Global Positioning System (GPS), transforming navigation on Earth.”

    After four decades of evolution, the space industry is poised to play a significant role in the automotive sector, arriving at a crucial moment. Ulrich Hermann emphasized, ‘The automotive industry, a $3 trillion enterprise supporting 13 million jobs in the EU, is currently facing considerable risks.’

    The European automotive sector, responsible for one-fifth of global car production, has enjoyed sustained success, contributing €140 billion in annual revenue and €375 billion in taxes to EU governments. It stands as Europe’s leading investor in research and development, channeling €59 billion annually.

    Despite these achievements, there is no time for complacency.

    A Narrative of Change

    Europe faces an urgent need to respond to global shifts: China dominates the electric vehicle market, and calls for the industry to evolve are growing. China currently holds a 60% market share in global passenger car electric vehicle sales, with greenfield investment surging 53% to €4.2 billion in 2022. The nation’s ambitious five-year plan aims for complete Vehicle-to-Everything (V2X) communication coverage by 2025.

    This global landscape is propelling the industry to swiftly transition towards a new model centered on delivering services related to mobility, autonomy, and connectivity, according to Hermann. He emphasizes the need for Europe to invest an additional $1 trillion to secure the industry’s future.

    Hermann further outlines the stages of autonomy, ranging from no automation to partial and high automation, culminating in full automation where human involvement becomes obsolete. Describing this level as disruptive, he envisions cars with their own will, devoid of steering wheels or pedals, capable of performing tasks as proficiently as experienced human drivers. However, achieving full autonomy will take at least another 15 years and is contingent upon advancements in satellite technology.

    Progressing Toward Complete Autonomy

    Hermann emphasized, “Profitability in the future will be driven by software-based services.” In the upcoming model, individuals and businesses will subscribe to services featuring fully automated vehicles readily available to transport them to their destinations. This allows users to utilize their time for work or entertainment.

    The functionality of autonomous cars is facilitated by a combination of sensors, algorithms, machine learning systems, and robust processors, all governed by software. Sensors and radars map the vehicle’s surroundings, continuously monitoring nearby objects, while cameras provide a comprehensive view, potentially rendering traditional traffic signals and signs obsolete. Actuators assume control over acceleration, steering, and braking.

    For software-defined vehicles to operate at full capacity, 100% connectivity is essential. To ensure complete safety and full autonomy, these vehicles cannot rely on a single point of connectivity; they necessitate a network. Unlike cellular or mobile networks, which may face limitations outside populated areas or be susceptible to congestion or outages, a connection to a satellite network guarantees continuous coverage, eliminating the risk of the vehicle being out of range.
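
    To make the sense-plan-act division described above concrete, here is a deliberately simplified Python sketch of such a control loop. Every name in it (the WorldModel structure, the placeholder planning policy, the stub sensors) is hypothetical; a production autonomy stack is vastly more complex and safety-certified. The point is only to show how software mediates between sensing, decision-making, and actuation, and why a lost connection has to trigger a safe fallback.

      import time
      from dataclasses import dataclass


      @dataclass
      class WorldModel:
          obstacles: list   # objects detected by radar and cameras
          gps_fix: tuple    # (lat, lon) from satellite positioning
          connected: bool   # is the satellite/cellular link up?


      def plan(world: WorldModel) -> dict:
          """Choose throttle/brake/steering; fall back to a safe stop
          when the vehicle loses connectivity."""
          if not world.connected:
              return {"throttle": 0.0, "brake": 1.0, "steering": 0.0}
          if world.obstacles:
              return {"throttle": 0.0, "brake": 0.5, "steering": 0.0}
          return {"throttle": 0.3, "brake": 0.0, "steering": 0.0}


      def drive(sense, actuate, cycles=3, hz=20):
          """Run the control cycle at a fixed rate: sense the surroundings,
          plan a command, hand it to the drive-by-wire actuators."""
          for _ in range(cycles):
              actuate(plan(sense()))
              time.sleep(1.0 / hz)


      def fake_sense():
          # Stub sensor fusion so the sketch runs end to end.
          return WorldModel(obstacles=[], gps_fix=(48.1, 11.6), connected=True)


      if __name__ == "__main__":
          drive(fake_sense, actuate=print)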

    The extensive process generates vast volumes of data, presenting a central challenge in the ongoing transformation. As SAP undergoes a transition to the cloud, it is establishing platforms to interconnect the digital twins of previously isolated analog entities and their processes. This initiative supports the shift from a mechanical, analog, and disjointed driving experience to a fully digital, connected one, underpinned by AI and an extensive ecosystem of applications.

    Torsten Welte, Global VP for Aerospace & Defense at SAP, remarked, ‘This is a notable illustration of the transformation unfolding across nearly every sector.’ According to Welte, businesses are poised to leverage increased satellite data to enhance their operations and decision-making processes, particularly in logistics, manufacturing, and ESG reporting.

    Welte emphasized the crucial role of integrating data and providing a unified data model for future AI models. Tools such as SAP Business Technology Platform and SAP Datasphere are specifically designed to empower IT organizations in efficiently expanding these capabilities for the benefit of the business.

    Navigating the Divide

    Ulrich Hermann and the team at Einstein Industries Ventures emphasize that the emerging automotive industry is undeniably dependent on space technology. Hermann explains, “We are now entering a new era called the economy of things, which seamlessly integrates space technology, smart cities, clean energy, ecological cars, and software industries.”

    The primary concern is that European players in both the space and automotive sectors have begun to evolve but still lack the essential core capabilities. In contrast, the US automotive and space ecosystem is already a decade ahead, benefiting from significant investments and the emergence of new, risk-taking entities.

    Hermann concludes, “Space is a costly endeavor, and the EU needs a long-term vision to stimulate the development of its space industry.” The collaboration between European politics, the automotive industry, and the space ecosystem is crucial.

    Once aligned, they stand to gain economic benefits across various sectors. Additionally, Europe will experience the advantages of decarbonization as clean, electrified modes of transportation replace outdated, fossil fuel-driven vehicles.

    With revenue growth identified as the primary goal in the research, 41% of the midsize financial institutions surveyed are focusing on cultivating a workforce of “customer-centric employees” to differentiate themselves from competitors.

    The findings from IDC highlight a wider industry trend, showcasing the financial sector’s inclination to adapt to the continuously changing economic landscape and align business strategies with the evolving dynamics of global markets. Notably, more successful banks are taking an additional step by reassessing their core technology framework to elevate the customer experience and ensure long-term resilience throughout their ecosystem.

    Preparing Midsize Banks for the Future

    In order to attract and retain customers, banks need to provide their employees with cutting-edge digital tools and a user experience that fosters engagement, growth, and satisfaction in every customer interaction. This enables employees to actively contribute to process improvement, make informed decisions, and deliver relevant products and services.

    Banks that operate within a secure and agile cloud environment gain immediate access to transformative technologies, including generative AI, machine learning, and predictive analytics. This access empowers organizations to responsively develop new business models in the cloud, aligning with evolving banking requirements and expectations while avoiding disruptions, improving efficiency, funding innovation, transforming mission-critical systems, and mitigating risks.

    Consider the example of AstroBank Public Company Limited, a midsize bank that recognized the opportunity to enhance its accounting system for better support of its growth. The bank successfully constructed a tightly integrated, next-generation ERP landscape in the cloud, marking a significant digital transformation. This initiative has enabled the bank’s workforce of approximately 450 employees to streamline accounting processes, ensure compliance, and elevate financial operations, resulting in improved productivity, speed, and innovation.

    The ongoing transformation journey has yielded substantial structural improvements, including:

    • A dramatic reduction in financial closing time, reported as a 3,000% improvement (closing is now roughly 30 times faster).
    • A 30% decrease in the cost of managing financial operations.
    • A 30% increase in productivity, with expectations for a 50% further boost.

    According to IDC research, AstroBank is part of a broader trend in reframing core ERP systems in the cloud. Surveys indicate that over 98% of financial institutions globally have already adopted cloud technology for at least one or two workloads, with many aiming to adopt a “cloud-first” approach in the future. Notably, 39% of institutions currently using on-premise ERPs plan to transition these platforms to the cloud within the next 12 months.

    The adoption of cloud ERP enables banks to fully meet customer needs and expectations by facilitating smoother connectivity, data-driven intelligence, operational effectiveness, financial insight, and risk control. This shift empowers growth-focused banks to establish a comprehensive platform offering banking and related nonbanking services – ranging from simple after-sales services to complex outcome-as-a-service models and data monetization.

    Furthermore, critical workloads such as payments, core processes, and insurance systems are increasingly migrating to the cloud. This strategic approach enables banks to move beyond traditional financial reporting and profit-and-loss analyses, leveraging advanced data analytics to gain deeper insights and plan more strategically.

    The Onset of Enduring Customer Outcomes

    As indicated in IDC’s research and demonstrated by AstroBank’s journey, cloud technology is swiftly becoming an integral component of financial institutions’ strategies for 2024 and beyond. This widespread adoption signifies a collective acknowledgment of the technology’s capacity to fulfill customer expectations, enhance operational effectiveness, and open up new avenues for growth.

    As the industry progressively embraces cloud-first approaches, the integration of the latest innovations and strategic planning emerges as a pivotal foundation for midsize banks. This holds particular significance for midsize banks aiming not only to withstand but also to flourish in an era characterized by continual change and escalating competition.

  • The application of human-like techniques allows AI on edge devices to continue learning over time.

    With the PockEngine training method, machine-learning models can efficiently and continuously learn from user data on edge devices like smartphones.

    Such personalized deep-learning models could power intelligent chatbots that learn to understand a particular user’s accent, or smart keyboards that continually update themselves to better predict the next word based on that person’s typing history. Achieving this kind of customization requires repeatedly fine-tuning a machine-learning model with fresh data.

    Because devices such as smartphones lack the memory and computing power needed for this fine-tuning process, user data is typically sent to cloud servers where the model is updated. But transmitting the data consumes a great deal of energy, and sending sensitive user data to an external cloud server poses a privacy risk.

    Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere have developed a technique that enables deep-learning models to adapt to new sensor data directly on an edge device.

    Their on-device training method, called PockEngine, determines which parts of a large machine-learning model need to be updated to improve accuracy, and stores and computes only those pieces. The bulk of this computation is performed while the model is being prepared, before deployment, which reduces computational overhead at runtime and speeds up fine-tuning.

    PockEngine sped up on-device training by as much as 15 times on some hardware platforms, without any loss of model accuracy. The researchers also found that their fine-tuning method enabled a popular AI chatbot to answer complex questions accurately.

    On-device fine-tuning can bring better privacy, lower costs, customization, and even lifelong learning, but it is not easy: everything has to happen within a limited amount of resources. “We want to be able to run not only inference but also training on an edge device. With PockEngine, now we can,” says Song Han, an associate professor in EECS, a member of the MIT-IBM Watson AI Lab, a distinguished scientist at NVIDIA, and senior author of an open-access paper describing PockEngine.

    The paper’s lead author is Ligeng Zhu, a PhD student in EECS, joined by colleagues at MIT, the MIT-IBM Watson AI Lab, and the University of California San Diego. The work was recently presented at the IEEE/ACM International Symposium on Microarchitecture.

    In the realm of deep learning, models are constructed based on neural networks, comprising interconnected layers of nodes, or “neurons.” These neurons process data to generate predictions. When the model is set in motion, a process known as inference takes place. During inference, a data input, such as an image, traverses through the layers until a prediction, like an image label, is produced at the end. Notably, each layer doesn’t need to be stored after processing the input during inference.

    However, during the training and fine-tuning phase, the model undergoes backpropagation. In backpropagation, the model is run in reverse after comparing the output to the correct answer. Each layer is updated as the model’s output approaches the correct answer. Since fine-tuning may require updating each layer, the entire model and intermediate results must be stored, making it more memory-intensive than inference.
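
    To illustrate this asymmetry, the short PyTorch snippet below (a toy model, not one from the paper) shows that an inference-only forward pass records no graph, while a training pass has to retain enough state to backpropagate through every layer. It is a minimal sketch of the general principle, not of PockEngine’s own memory accounting.

      import torch
      import torch.nn as nn

      model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10))
      x = torch.randn(32, 256)

      # Inference: no autograd graph is recorded, so intermediate activations
      # do not need to be kept around.
      with torch.no_grad():
          preds = model(x)
      print(preds.requires_grad)   # False -> nothing retained for backprop

      # Training: the forward pass records a graph so every layer can be updated.
      out = model(x)
      loss = out.pow(2).mean()
      loss.backward()              # consumes the stored activations
      print(all(p.grad is not None for p in model.parameters()))   # True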

    Interestingly, not all layers contribute equally to accuracy improvement; even for crucial layers, the entire layer may not require updates. These unessential layers or parts thereof don’t need to be stored. Moreover, there might be no need to trace back to the initial layer for accuracy enhancement; the process could be halted midway.

    PockEngine capitalizes on these insights to accelerate the fine-tuning process and reduce the computational and memory demands. The system sequentially fine-tunes each layer for a specific task, measuring accuracy improvement after each layer adjustment. This approach allows PockEngine to discern the contribution of each layer, assess trade-offs between accuracy and fine-tuning costs, and automatically determine the percentage of each layer that requires fine-tuning.

    Han emphasizes, “This method aligns closely with the accuracy achieved through full backpropagation across various tasks and neural networks.”
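
    As a rough illustration of that idea (and emphatically not PockEngine’s actual implementation), the PyTorch sketch below freezes the layers judged least useful and fine-tunes only the rest. The per-layer “importance” scores are invented placeholders standing in for the accuracy measurements described above, and the selection happens once, before the training loop ever runs.

      import torch
      import torch.nn as nn

      model = nn.Sequential(
          nn.Linear(128, 128), nn.ReLU(),
          nn.Linear(128, 128), nn.ReLU(),
          nn.Linear(128, 10),
      )

      # Hypothetical per-layer contributions to accuracy, as if measured
      # by profiling each layer on the target task.
      importance = {0: 0.01, 2: 0.40, 4: 0.35}   # layer index -> measured gain
      BUDGET = 2                                  # fine-tune only the top-k layers

      keep = sorted(importance, key=importance.get, reverse=True)[:BUDGET]
      for idx, layer in enumerate(model):
          for p in layer.parameters():
              p.requires_grad = idx in keep       # freeze the unimportant layers

      # Build the optimizer once, over the selected parameters only.
      trainable = [p for p in model.parameters() if p.requires_grad]
      opt = torch.optim.SGD(trainable, lr=1e-2)

      x, y = torch.randn(16, 128), torch.randint(0, 10, (16,))
      loss = nn.functional.cross_entropy(model(x), y)
      loss.backward()                             # parameter gradients accumulate only for the kept layers
      opt.step()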

    Traditionally, generating the backpropagation graph is heavy computational work that happens at runtime. With PockEngine, this work is done at compile time, before the model is deployed.

    PockEngine prunes away superfluous sections of code and unneeded parts of layers, producing a streamlined graph for use at runtime, and then applies further optimizations to that graph for efficiency.

    Because all of this only has to happen once, there is minimal computational overhead at runtime.

    “It is as if you were preparing for a hike: you sit down at home and plan everything in advance, deciding which trails you will take and which you will skip,” Han explains.

    When the researchers applied PockEngine to deep-learning models on a range of edge devices, including Apple M1 chips, the digital signal processors found in many modern smartphones, and Raspberry Pi computers, it trained models up to 15 times faster than other approaches, with no loss of accuracy. PockEngine also significantly cut the memory required for fine-tuning.

    The team also applied the technique to the large language model Llama-V2. With large language models, fine-tuning involves many examples, and it is important for these models to learn how to interact with people, Han says. Fine-tuning is also crucial for models tasked with complex problem-solving and reasoning about solutions.

    For example, when using PockEngine to fine-tune Llama-V2 models, they successfully answered the question, “What was Michael Jackson’s last album?” In contrast, models that weren’t fine-tuned failed to provide the correct answer. With the help of PockEngine, the time for each iteration of the fine-tuning process was reduced from about seven seconds to less than one second on an NVIDIA Jetson Orin, an edge GPU platform.

    Looking ahead, the researchers aim to leverage PockEngine for fine-tuning even larger models designed to handle both text and images simultaneously.

    “This research tackles the increasing efficiency challenges posed by the widespread adoption of large AI models like LLMs across various applications in diverse industries. It not only shows promise for edge applications incorporating larger models but also has the potential to reduce the cost associated with maintaining and updating large AI models in the cloud,” says Ehry MacRostie, a senior manager in Amazon’s Artificial General Intelligence division. MacRostie, who was not involved in this study, collaborates with MIT on related AI research through the MIT-Amazon Science Hub.

    Support for this work was provided, in part, by the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, the MIT-Amazon Science Hub, the National Science Foundation (NSF), and the Qualcomm Innovation Fellowship.

  • Programmers are increasingly adopting tools powered by artificial intelligence, and this shift is reshaping their approach to coding.

    Since the spotlight embraced generative AI tools like OpenAI LP’s ChatGPT and Google LLC’s Bard, the world has been captivated by their conversational abilities and impressive utility. One of their early and prominent applications is assisting software developers in writing code.

    Following ChatGPT’s surge in popularity and its remarkable conversational and writing capabilities, numerous code-generating chatbots and developer tools have flooded the market. These include GPT-4, the same model behind the premium ChatGPT Plus service, Microsoft Corp.’s GitHub Copilot coding assistant, Amazon Web Services Inc.’s Code Whisperer, and others.

    According to a June survey by Stack Overflow involving over 90,000 developers, almost 44% reported using AI tools in their work, with an additional 26% expressing openness to using them soon. The tools were primarily employed for code production (82%), debugging and assistance (48%), and learning about specific codebases (30%).

    To understand how developers are integrating AI tools into their daily work, SiliconANGLE interviewed several developers. Two major coding tool modes have emerged: chatbots like ChatGPT that generate code snippets or check code, and coding assistants like GitHub Copilot that provide suggestions as developers write.

    For instance, Oliver Keane, an associate web developer at Adaptavist Group Ltd., uses ChatGPT as a coding companion to swiftly establish code foundations for projects. Keane provided an example of creating a content management system, where GPT was asked to generate a method for admins to update FAQs. He highlighted that, although he could code it himself, using GPT results in a solution 80% of the time on the first attempt.
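
    For a sense of what such a prompt produces, the sketch below shows the kind of routine a chatbot might draft when asked for “a method that lets an admin update an FAQ entry.” It is a purely hypothetical reconstruction in Python; Keane’s actual code, language, and stack were not published, so every name here is invented.

      from dataclasses import dataclass
      from typing import Dict, Optional


      @dataclass
      class FAQ:
          question: str
          answer: str


      class FAQStore:
          """Toy stand-in for a CMS data layer."""

          def __init__(self) -> None:
              self._faqs: Dict[int, FAQ] = {}

          def update_faq(self, user_role: str, faq_id: int,
                         question: Optional[str] = None,
                         answer: Optional[str] = None) -> FAQ:
              """Let an admin edit an existing FAQ; anyone else is rejected."""
              if user_role != "admin":
                  raise PermissionError("only admins may edit FAQs")
              if faq_id not in self._faqs:
                  raise KeyError(f"no FAQ with id {faq_id}")
              faq = self._faqs[faq_id]
              if question is not None:
                  faq.question = question
              if answer is not None:
                  faq.answer = answer
              return faq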

    However, Keane emphasized the need for upfront work, as chatbots like ChatGPT require a conversational history or context to function effectively. Developers must build a rapport by prompting the tool with relevant code until it understands the project, after which it provides better responses.

    Once trained, developers can use these models for code reviews, bug discovery, and revisiting old work, essentially turning them into “pair programmers” for discussions on improving old code. Keane noted that, while the AI tool meets his needs 80% of the time, refining prompts can enhance its performance.

    Chatbots, besides aiding in coding tasks, also prove valuable for learning new frameworks and coding languages. Well-trained language models can serve as effective tutors, rapidly bringing developers up to speed, often surpassing traditional learning resources due to their conversational nature and interactive code discussions.

    AI code completion tools like GitHub Copilot have been a revelation for me and many other developers. They’re not chatbots – they’re like having an extra pair of hands at the keyboard. Copilot helps me write better code, even though I’m not writing most of it myself.

    The way this works is that Copilot suggests code snippets as I type. These snippets are often exactly what I need, and they force me to write my own code in a more descriptive and precise way. This is because if my code is too vague, Copilot will either go haywire or suggest irrelevant code.

    In the beginning, I found that Copilot would only suggest short stretches of code, a few lines at a time. But as I got used to it, Copilot started suggesting longer and longer snippets, sometimes 20 lines or more. This has saved me a lot of time and effort, and it has given me more time to focus on other parts of my job, such as marketing and business development.
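
    A small, hypothetical Python example (not an actual Copilot transcript) shows why descriptive code gets better suggestions: the function name and docstring below act as the prompt, and the body is the sort of completion such a tool might offer.

      from typing import Dict, Iterable


      def average_order_value_by_customer(orders: Iterable[dict]) -> Dict[int, float]:
          """Group orders by their 'customer_id' and return the mean 'total' per customer."""
          sums: Dict[int, float] = {}
          counts: Dict[int, int] = {}
          for order in orders:
              cid = order["customer_id"]
              sums[cid] = sums.get(cid, 0.0) + order["total"]
              counts[cid] = counts.get(cid, 0) + 1
          return {cid: sums[cid] / counts[cid] for cid in sums}


      if __name__ == "__main__":
          print(average_order_value_by_customer([
              {"customer_id": 1, "total": 30.0},
              {"customer_id": 1, "total": 10.0},
              {"customer_id": 2, "total": 5.0},
          ]))   # {1: 20.0, 2: 5.0}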

    In a recent survey of Stack Overflow users, 37% of professional coders said that the main benefit of using AI code completion tools is improved productivity. Another 27% said that greater efficiency is the main benefit, and 27% said that speed of learning is the main benefit.

    I can definitely see why productivity is the primary benefit for all types of developers. AI code completion tools can save you a lot of time and effort, which can free you up to focus on other things. And if you’re just learning to code, these tools can help you learn faster and more effectively.

    Overall, I’m a big fan of AI code completion tools. They’re not perfect, but they’re a valuable tool for any developer.

    While these AI tools can be incredibly helpful for developers, they’re not without their challenges. For instance, large language models may sometimes produce misinformation or “hallucinate,” generating problematic code. When using tools like chatbots, bugs might be caught by the compiler or a seasoned coder during a code review. However, code-suggesting AIs can introduce similar issues, requiring additional time to understand what went wrong after the fact.

    Reeve, for example, described the anxiety he felt when GitHub Copilot generated a substantial amount of code at once. While it initially seemed like a time-saving boon, a bug surfaced hours later because of a minor mistake in the AI-generated code. This highlights a certain level of uncertainty when the tool anticipates too far ahead.

    According to a Stack Overflow survey, only about 42% of developers trust AI models, with 31% expressing doubts and the rest having more serious concerns about the outputs. Some AI models may also be more prone to hallucinations than others.

    On the flip side, these tools enable rapid code production in ways not seen before, potentially sparing developers from tedious tasks. However, there’s a concern that overreliance on AI may hinder newer developers from fully grasping the fundamentals of programming languages.

    Jodie Burchell, a developer advocate at JetBrains, emphasizes that AI coding tools should be viewed as tools and assistants. Developers remain responsible for ensuring that the code aligns with their intentions, even if the AI provides imperfect guidance. Burchell underscores the importance of critical thinking, stating that there’s no shortcut to simply letting models develop code without scrutiny.

    The 2023 Accelerate State of DevOps Report from Google’s cloud division suggests that while AI tools slightly improve individual well-being, their impact on group-level outcomes, such as overall team performance, is neutral or even negative. This mixed evidence is attributed to the early stages of AI tool adoption among enterprises.

    Despite potential challenges, more developers are expressing interest in incorporating AI tools into their workflows, as indicated by the Stack Overflow survey. This trend is particularly notable among developers learning to code, with 54% showing interest compared to 44% of professionals. Veteran coders, on the other hand, tend to be less enthusiastic about adopting these new AI tools.

    In the realm of generative AI developer tools, it’s still early days, but adoption is progressing rapidly. Companies like OpenAI and Meta continuously enhance their models, such as GPT-4, Codex, and Code Llama, for integration into more tools. As these models evolve and become more ingrained in the development process, developers may find themselves spending a significant portion of their coding time collaborating with AI tools. Learning to provide effective prompts, maintaining precision coding for guiding predictive algorithms, and understanding the models’ limitations will likely be crucial for navigating this AI-centric future.

  • OpenAI: A shining example of what humans can achieve when they work together.

    Five days after he was suddenly sacked, Sam Altman is going back to his old job as the boss of OpenAI, creator of the chatbot ChatGPT.

    The exploding head emojis have taken over my work chats this morning.

    It is still unclear why he was dismissed in the first place. Whatever it was, OpenAI’s board of directors was alarmed enough to take drastic action, moving swiftly and secretly and telling only a handful of people.

    In a statement, the board implied he had somehow not been honest with it, saying he had not been consistently candid in his communications.

    Yet given the gravity of all that, and the appointment of two interim CEOs in quick succession, what followed was remarkable: an extraordinary outpouring of support for Mr Altman from inside the company, and it worked. Almost every member of staff signed a letter threatening to resign unless Mr Altman was brought back. One of the signatories was chief scientist Ilya Sutskever, who sat on the board that ousted him; he later posted on X (formerly Twitter) that he regretted his part in Mr Altman’s removal.

    OpenAI’s week was a rollercoaster of emotions, with the company’s future hanging in the balance. It all started when Mira Murati, the interim CEO, posted a message on X saying “OpenAI is nothing without its people.” This was followed by a flurry of similar messages from other employees, accompanied by a sea of coloured hearts. The whole thing had an eerie resemblance to last year’s big Silicon Valley meltdown when Twitter staff sent a coded message of saluting emojis after Elon Musk’s takeover.

    Sam Altman, OpenAI’s co-founder, didn’t waste any time. By Monday, he had accepted a new job at Microsoft, OpenAI’s biggest investor. This move fueled speculation that Microsoft was effectively taking over OpenAI without staging a formal takeover. In the space of just two days, Microsoft had hired Altman, OpenAI co-founder Greg Brockman, and hinted at taking on other OpenAI employees who wished to join them.

    But in a surprising turn of events, Altman is back at OpenAI. This move is a testament to his power and influence, as he was able to walk back into the company despite having just accepted a job at its biggest investor. It’s also a sign of Microsoft’s commitment to OpenAI, as they were willing to let Altman return to the company without any strings attached.

    Only time will tell what the future holds for OpenAI. But one thing is for sure: the company is in for an exciting ride.

    So what are we to make of this whole soap opera?

    We might be talking about the creators of pioneering technology but there are two very human reasons why we should all care: money and power.

    Let’s start with money. Last month, OpenAI was reported to be valued at around $80bn, and investment money keeps pouring in. But its costs are also enormous: I’m told that every question typed into the chatbot burns through a huge amount of computing power, reportedly costing a couple of pence each time.

    Last month, a researcher in the Netherlands told me that even the super-rich Google could not afford to answer its search-engine queries at chatbot prices; the cost would be beyond even its limits.

    So it comes as no shock that OpenAI has turned to money-generating ventures to keep its investors happy.

    And with money comes power. Popcorn-grabbing distractions aside, let’s not forget what OpenAI is doing: building a technology that will in turn shape the world around it. AI is progressing fast and becoming mightier by the day, which only increases its potential danger. Mr. Altman has said recently that the 2023 version of ChatGPT will look outdated compared with what OpenAI plans to launch next, and the current version can already pass the bar exam that newly qualified lawyers must take.

    Few people are driving this uncommon wave of disruption, and Sam Altman is one of them. If all goes well, AI will make us smarter and more efficient; if it does not, the results could be destructive. But as these last five days have proven, the innovation is still anchored in people power, and humans can still get things wrong.

    I’ll leave you with my favorite comment about the whole debacle: “People building [artificial general intelligence] unable to predict consequences of their actions three days in advance”, wrote quantum physics professor Andrzej Dragan on X.

  • Microsoft is willing to match the salaries of all OpenAI employees.

    Microsoft has offered to match the pay of any staff who join it from crisis-ridden OpenAI.

    Sam Altman was unexpectedly removed from his position as CEO on Friday, prompting an offer from Microsoft for him to lead a new advanced AI research team. This decision has sparked significant discontent among OpenAI staff, with the majority threatening to resign unless Altman and co-founder Greg Brockman are reinstated.

    The situation regarding Altman’s potential move to Microsoft remains uncertain, even though Microsoft is OpenAI’s major investor. Evan Morikawa, an engineering manager at OpenAI, disclosed that 743 out of 770 employees have signed a letter urging the board to resign, with staff indicating their intention to leave if their demands are not met. Microsoft’s Chief Technology Officer, Kevin Scott, confirmed that job offers had been extended to OpenAI staff, assuring them that Microsoft would match their current compensation.

    The ambiguity also extends to Altman’s future, as Microsoft CEO Satya Nadella mentioned on CNBC that Altman might not join, expressing commitment to OpenAI and Altman regardless of the outcome. Nadella emphasized openness to various possibilities depending on the decisions of OpenAI staff.

    Nadella acknowledged that change is necessary at OpenAI and expressed a willingness to engage in discussions with their board about governance. Despite Microsoft’s substantial investment and extensive use of OpenAI’s technology, the company does not currently have a seat on OpenAI’s board.

    While OpenAI faces internal turmoil, Microsoft’s approach has been viewed positively by some in the industry. Nathan Benaich, a partner at Air Street Capital, commended Nadella’s decisions, describing them as “fantastic deal-making” and positioning Microsoft as a formidable force with the leadership of OpenAI, talent attraction capabilities, a robust balance sheet, and significant investments in computing infrastructure.