Category: Artificial Intelligence

  • Head of AI Resigns Amid Dispute Over ‘Exploitative’ Copyright Issue

    A senior executive at the tech firm Stability AI has resigned over the company’s view that it is acceptable to use copyrighted work without permission to train its products.

    Ed Newton-Rex used to lead the audio team at the company, which is based in the UK and US. He recently stepped down, telling the BBC that he finds it “exploitative” for AI developers to use creative work without getting permission.

    Many big AI companies, including Stability AI, argue that using copyrighted content is “fair use,” meaning they believe they can use original content without needing permission from its owners.

    Right now, the US Copyright Office is examining generative AI and related policy issues.

    Newton-Rex wants to make it clear that he’s talking about all AI companies that share this view – and most of them do. In response to a former colleague on Twitter, the founder of Stability AI, Emad Mostaque, said the firm believes fair use “supports creative development.”

    AI tools get trained using a ton of data, often taken from the internet without permission. Generative AI, which creates things like images, audio, video, and music, can then make similar material or even mimic the style of a specific artist if asked.

    Newton-Rex, who is also a choral composer, wouldn’t be keen on giving his own music to AI developers for free. He believes that many creators produce content with the hope that their copyright will be valuable someday. However, without permission, their work is being used to create competition and might even replace them.

    Newton-Rex previously built an AI audio generator for his former employer, called Stable Audio. He chose to license the data used for training and share revenue with rights holders, though he admits that this approach might not work for everyone.

    He remains optimistic about the benefits of AI and doesn’t plan to leave the industry. He hopes that globally, people will adopt an approach that says, “you need to get permission from the people who wrote it; otherwise, it’s not okay.”

    Using copyright material to train AI tools is a hot topic. Some creatives, like comedian Sarah Silverman and Game of Thrones writer George RR Martin, have taken legal action against AI firms, claiming their work was taken without permission.

    Earlier this year, a track featuring AI-generated voices of music artists Drake and The Weeknd got removed from Spotify after it was discovered that it was created without their consent. However, the boss of Spotify later said he wouldn’t completely ban AI from the platform.

    Stability AI faced legal action from the Getty image archive, accusing them of scraping millions of pictures for training their AI image generator, Stable Diffusion. Some news organizations, including the BBC and The Guardian, have blocked AI firms from using their material from the internet.

  • ServiceNow is growing its Now Assist family of generative AI agents.

    ServiceNow, a major player in information technology workflow management, announced today an expansion of its Now Assist family of generative artificial intelligence agents.

    The new additions, including Now Assist in Virtual Agent, flow generation, and Now Assist for Field Service Management, are now accessible to all customers through the ServiceNow Store. These enhancements aim to reduce the time employees spend searching, summarizing, and creating basic information, providing an immediate boost to productivity.

    ServiceNow’s cloud platform is widely used by IT and human resources teams to handle day-to-day tasks, such as addressing employee support requests and managing cybersecurity risks. The platform’s applications extend beyond IT, reaching areas like customer service ticket processing.

    In recent months, ServiceNow has been actively incorporating new generative AI capabilities across its product portfolio, particularly focusing on the Now Assist family of AI agents.

    These agents are integrated with various modules, including IT Service Management, Customer Service Management, Human Resources Service Delivery, and Workflow Creator, prioritizing accuracy and data privacy.

    The introduction of Now Assist in Virtual Agent allows ServiceNow customers to create and deploy generative AI chat experiences quickly, leveraging enhanced guided setup capabilities.

    This enables the creation of customer and employee service agents offering superior conversational experiences, pulling the latest information from internal knowledge bases or service catalogs.

    The Flow generation tool, a new addition, assists administrators and developers in creating workflow blueprints for faster development at scale. It converts plain text into low-code workflows, eliminating the need for manual flow automation creation.
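To make the idea concrete, here is a deliberately simplified sketch of what “plain text in, workflow steps out” could look like. This is illustrative only and is not ServiceNow's actual flow generation mechanism (which is powered by a language model, not rules); the function name and step format are invented for this example.

```python
import re

def text_to_flow(description: str) -> list[dict]:
    """Split a plain-text process description into ordered workflow steps."""
    # Split on common sequencing cues: ", then", sentence ends, semicolons.
    parts = re.split(r",\s*then\s+|\.\s+|;\s*", description.strip().rstrip("."))
    return [
        {"step": i + 1, "action": part.strip()}
        for i, part in enumerate(parts)
        if part.strip()
    ]

flow = text_to_flow("Create a ticket, then notify the manager, then close the request")
for step in flow:
    print(step)
```

A real text-to-workflow system would map each extracted action onto catalog items and low-code flow components rather than plain strings, but the input/output shape is the same: unstructured prose in, an ordered, machine-readable blueprint out.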

    Developers can then make continuous adjustments and refinements using a no-code design interface in the ServiceNow App Engine, expediting automation rollouts.

    Now Assist for Field Service Management focuses on simplifying the work order process by using generative AI to summarize work order tasks based on all activities and incidental data. This is particularly beneficial for field technicians relying on mobile devices as they move between sites to complete their work.

    While it may seem that ServiceNow is joining the generative AI trend, the company emphasizes that it has invested several years in developing and integrating AI into its platform. Instead of relying on existing generative AI models, ServiceNow has built its own domain-specific Now large language model.

    Trained on its workflows and data, this model is customized to perform specific tasks within the platform, ensuring a better user experience, transparency, and superior governance and data security.

    ServiceNow President and Chief Operating Officer CJ Desai highlighted the role of AI in achieving faster execution and smarter decision-making, positioning ServiceNow at the forefront by intelligently integrating generative AI into the core of the Now Platform.

    This integration allows organizations to harness AI securely and confidently for accelerated business value.

  • Salesforce is working to transform customer service experiences using advanced AI technology that generates solutions.

    Today, Salesforce announced a new addition to its toolkit for customer service—the Service Intelligence application. This application, integrated into its flagship Service Cloud platform, is fueled by Salesforce’s powerful Data Cloud. It empowers service agents and managers by consolidating all necessary information for customer support, eliminating the need to navigate through various screens.

    Service Intelligence aims to facilitate quicker, more informed decisions for customer service professionals, ultimately enhancing customer satisfaction and providing a competitive advantage. The application features customizable dashboards offering a snapshot of critical metrics like customer satisfaction and workloads.

    Managers can efficiently oversee teams, respond promptly to challenges, and identify trends using the Einstein Conversation Mining capability, which analyses customer interactions.

    One notable aspect of Service Cloud is the Einstein Conversation Mining feature, utilizing AI to swiftly identify trends in customer conversations. For instance, if a surge in inquiries about a product’s return policy is detected, AI customer service bots can be trained to address these concerns efficiently.

    Additionally, Service Intelligence allows users to seamlessly transition from the dashboard to data exploration visualizations in Tableau Software, maintaining the same data context. This facilitates sharing crucial insights more effectively.

    While the current features are available, Salesforce plans additional updates, including the integration of Einstein Copilot in Service Intelligence. This will enable users to interact with Einstein, a generative AI assistant, for insights and metrics in a conversational manner.

    Another upcoming addition is Einstein Studio, providing quick access to additional insights such as predicting customer escalations and issue resolution times.

    A new feature called Customer Effort Score is also on the horizon. This tool will provide a comprehensive view of the difficulty of each customer’s service experience, along with recommendations for improving satisfaction, such as offering discounts in challenging situations.

    Salesforce’s ongoing efforts in AI, exemplified by Service Intelligence, underscore its commitment to enhancing customer interactions. The company has consistently introduced new AI features across its various platforms, with a significant focus on research and development in the AI domain.

    Analysts, such as Rebecca Wetterman, emphasize that success in the customer service industry relies on making interactions easy and efficient, a goal that AI can significantly contribute to.

    Salesforce’s AI initiatives have been a continuous effort, involving collaborations with industry leaders like IBM. The overarching aim is to help customers seamlessly adopt and benefit from generative AI tools across different business functions, including sales, marketing, customer service, finance, and more.

  • Microsoft just introduced new AI tools, tailor-made for both Windows developers and IT professionals. It’s a whole set of fresh capabilities designed to make everyone’s job a bit easier and more exciting.

    Microsoft is really into bringing artificial intelligence to your Windows 11 experience! They just spilled the beans at their Ignite 2023 conference about some updates and cool stuff for both developers and their cloud-based Windows 365 service.

    In the recent Windows 11 update on September 26, Microsoft didn’t hold back. They added a bunch of features for developers, all aimed at making AI a big deal in the operating system. The idea? To make the developer experience super productive. And guess what? They’re rolling out new tools to help developers kickstart their AI projects on Windows, especially for creating and using conversational experiences. It’s like they’re giving developers the keys to the AI kingdom!

    AI app development is also being made simpler with the launch of Windows AI Studio. Through it, developers can access the AI development tools and models made available through Azure AI Studio, announced during Ignite, as well as models from other providers, including Hugging Face Inc.

    Rather than large language models like OpenAI LP’s GPT-4, the focus here is on small language models (SLMs), which are lightweight and designed for specific industry-focused or business purposes. LLMs, by contrast, are general-purpose and consume processor cycles extensively, making them costly to run; they normally sit in the cloud and execute across arrays of graphics processing units (GPUs). Using an SLM also means all data is processed locally, so no data leaves the device, making it more secure.

    Developers get a guided workspace setup for model configuration and fine-tuning of popular SLMs such as Phi, Llama 2, and Mistral in AI Studio. They can then quickly cycle through different models and refine them by accessing Prompt Flow and Gradio templates right from the workspace.

    The company will also release a Visual Studio Code extension in the coming weeks to make it easy for developers starting on AI app development to integrate their models into different Windows applications. It comes with a guided workspace interface that lets developers concentrate on development while the Studio prepares the environment automatically and offers AI tools.

    AI Studio is about to get even cooler with new capabilities. Soon, it will bring you a bunch of fancy models designed specifically for Windows GPUs. First up on the list are Llama2-7B, Mistral-7B, and Stable Diffusion XL.

    Now, let’s talk about Microsoft’s Copilot AI-assistant. It’s like this super-smart helper integrated into Windows 11, the Edge web browser, and Bing search. It can chat with you, help out with tasks in Windows, write documents in Word, crunch numbers in Excel—basically, it’s your AI sidekick.

    But that’s not all. Developers, listen up! You’ll soon have the power to create and use small language models on your own turf. This means you can build custom AI-powered apps that chat with users, handle questions, automate tasks, and solve problems, all tailored to specific situations.

    Oh, and there’s more good news. Microsoft is giving some love to Windows 365, the cloud-based OS service for Windows 11 or Windows 10, and Azure Virtual Desktop. Admins, get ready for an easier time deploying desktops and systems to both in-office and remote workers.

    Windows 365 lets you launch a full operating system from anywhere on any device—MacBooks, iPads, Linux devices, Android phones—you name it. It’s like having your computer in the cloud. And for developers, there’s a Dev Box for creating personalized dev environments in the cloud, accessible from any device.

    Azure-based Windows Virtual Desktops take a different approach, providing virtualized apps and remote desktops for Windows 10 and Windows 11.

    Now, for some exciting previews. Windows 365 is gearing up for graphics processing unit (GPU) support, perfect for creative tasks like graphic design and 3D modeling. There’s also a sneak peek at AI capabilities for Cloud PC resources, offering recommendations to save costs and boost efficiency.

    Security buffs, Microsoft’s got your back. Single-sign-on and passwordless authentication support is ready to roll for Windows 365 and Azure Virtual Desktop. They’re also adding features like blocking screen captures, protecting against tampering, and even a lockbox capability to keep your data safe from prying eyes.

    And get this, soon organizations can encrypt their Windows 365 Cloud PC instance disks with their own encryption keys. This means top-notch security for your data while it’s resting in the cloud. Microsoft’s making sure your information stays safe and sound.

  • Microsoft is supercharging the development of artificial intelligence with enhancements to Fabric, Azure AI Studio, and updates to its databases.

    Microsoft Corp. is raising its game for artificial intelligence developers with a host of updates to its Azure cloud computing platform.

    During its Ignite 2023 conference, Microsoft highlighted that the key to making artificial intelligence (AI) successful is having access to the right data. They introduced Microsoft Fabric as part of the Intelligent Data Platform, a tool that combines different databases, analytics, and governance services. This helps companies seamlessly connect their crucial data with their AI processes. Additionally, Microsoft unveiled Azure AI Studio, which comes with improved AI search features and content safety measures.

    According to Jessica Hawk, Azure’s corporate vice president of data, AI, digital applications, and marketing, every AI application has its foundation in data. Therefore, a company needs a complete data estate to support artificial intelligence innovation; however, this is not easy to develop if there are fragmented data systems.

    The company says Microsoft Fabric, announced in May at Build 2023, weaves together the essential data and analytics software for developing leading-edge AI apps and systems. The new software-as-a-service offering combines platforms such as Data Factory, Synapse, and Power BI into a single product, replacing those disparate systems with an alternative that is easier to install, use, and administer, as well as more cost-efficient. Microsoft already provides an end-to-end AI development platform in Azure OpenAI, with which one can build sophisticated next-generation AI applications, but that work must be backed by an uninterrupted stream of clean, well-integrated data.

    The company explained that with Fabric, developers get a full-fledged data platform instead of a confusing maze of unrelated tools. All the capabilities needed to derive meaning from data and share it with AI systems are present in the platform; moreover, it will add its own copilot tool, in preview, that lets programmers work through natural-language commands.

    Microsoft Fabric sits on top of an open data lake platform called OneLake, which keeps everything in one place, so there’s no need to mess around with moving or copying data. With OneLake, Fabric makes sure that data is properly managed and has a pricing system that grows as you use it more. Plus, it’s open, meaning you’re not stuck with one company’s way of doing things, and it plays nice with Microsoft 365 apps like Excel, Dynamics 365, and Microsoft Teams. It’s all about making things work together smoothly.

    Microsoft also rolled out a public preview of Azure AI Studio, which gives AI developers every tool required for working with generative AI models. It provides access to the latest large language models and integrates data to enable retrieval-augmented generation (RAG), in which AI systems draw on a company’s private datasets, alongside intelligent search, full-cycle model management, and content safety tools. A central function of Azure AI Studio is Prompt Flow, which helps manage prompt orchestration and LLMOps; the Azure Machine Learning service has also introduced Prompt Flow, making it possible to prototype, experiment, iterate, and deploy AI applications.

    Another tool offered under Azure AI Studio is Azure AI Search, previously called Cognitive Search. It allows quicker and more accurate data retrieval to improve the quality and response times of AI. This approach, known as retrieval-augmented generation, lets LLMs integrate information from external sources and consequently produce better answers.

    RAG is critical in cases where a customer service agent must give valid responses drawn from trustworthy information. It is powered by vector search, an information retrieval method that works across diverse data forms such as images, audio, text, and video. Vector search is therefore one of the crucial functions for obtaining fuller information for AI.
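The retrieval step described above can be sketched in a few lines: embed documents and the query as vectors, rank documents by cosine similarity, and pass the best match on to a language model. Real services like Azure AI Search use learned embedding models and approximate nearest-neighbour indexes; the tiny three-dimensional vectors below are made up purely for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical document embeddings (in practice, produced by an embedding model).
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "privacy notice": [0.0, 0.2, 0.9],
}

# Embedding of a query such as "how do I get my money back?" (also invented).
query = [0.85, 0.15, 0.05]

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # the closest document is then fed to the LLM: the RAG step
```

Because similarity is computed on embeddings rather than keywords, the same mechanism works for images, audio, and video once each is embedded into the shared vector space.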

    The semantic ranker in Azure AI Search lets developers use the same search re-ranking technology that underlies Microsoft Bing searches to rank responses by relevance.

    Azure AI Content Safety gives developers tools to check how well their models are doing, making sure they’re accurate and free from biases or other security risks. It comes with cool features like the Responsible AI Dashboard and model monitoring. Microsoft also promises to have the back of Azure customers, protecting them from legal troubles related to copyright infringement. This commitment, called the Customer Copyright Commitment, applies to everyone using the Azure OpenAI Service. It’s basically Microsoft saying, “We’ve got your back if anything goes wrong, and we’re confident in our ability to keep your AI development safe.”

    Microsoft has some really big news for AI developers, and there’s even more coming their way. Picture this: GPT-4 Turbo with Vision is on the horizon for Azure OpenAI Service and Azure AI Studio. It’s like the superhero of language models, brought to you by Microsoft’s partner OpenAI LP. This powerhouse gives developers the keys to unlock multi-modal capabilities in their AI applications. Translation: AI that can not only understand language but also see and make sense of things, whether it’s in the real world, videos, or elsewhere.

    But wait, there’s more! The Azure Cosmos DB database service is levelling up, aiming to be the go-to choice for AI developers. Now, it’s all about dynamic scaling – the ability to flexibly adjust database sizes based on demand. This ensures developers are using just the right amount of cloud resources, optimizing their costs while rocking the AI world.

    Microsoft has also added more features, including the availability of Azure Cosmos DB for MongoDB vCore and vector search in Azure Cosmos DB for MongoDB vCore. This gives developers the ability to build intelligent applications that use data stored in Azure Cosmos DB, with full support for MongoDB. The result is more native Azure integration and a lower total cost of ownership for MongoDB developers, with the same familiar database service experience.

    Vector search allows unstructured data in Azure Cosmos DB to be stored as embeddings and searched directly, avoiding the hassle of transferring data to a dedicated platform with vector search functionality.

    In addition, Microsoft is improving Azure Kubernetes Service (AKS), its managed version of the open-source Kubernetes software that orchestrates the containers making up AI applications.

    The company said that AKS now supports specialized machine learning workloads such as LLMs with virtually no overhead to provision them for operation. The updated AI toolchain operator for LLMs in AKS scales across CPU and GPU cluster nodes.

    It does this by choosing the best available infrastructure for each model. Developers can thus spread inferencing costs across a few low-GPU-count virtual machines, which lowers costs, increases the regions where LLM workloads can run, and eliminates the wait times associated with higher-GPU-count VMs. The company added that customers will also be able to use preset models hosted on AKS, shortening the setup time for the inference service.

    Guess what’s new in AKS? They’ve introduced the Azure Kubernetes Fleet Manager, and it’s like the superhero for managing workload distribution across Kubernetes clusters. This means it helps make sure everything is running smoothly and keeps your platform and applications up to date. Developers can breathe easy knowing they’ve got the latest and most secure software running the show.

    And that’s not all! Microsoft just dropped the latest version of its .NET computing framework, and it’s called .NET 8. This update is a game-changer for Windows, Linux, and macOS operating systems. It promises a big leap in performance and productivity for ASP.NET and .NET developers, setting the stage for a whole new era of super-smart cloud-native applications that play nice with Azure.

    Hold on, there’s more goodness. .NET 8 comes with fancy features like support for Azure Functions, Azure App Service, AKS, and Azure Container Apps. They’ve jazzed up Visual Studio, their integrated development environment, and made friends with GitHub and Microsoft DevBox. It’s like Microsoft is rolling out the red carpet for developers!

  • YouTube will provide a feature allowing users to flag songs created by AI that imitate the voices of real artists.

    Record companies will be able to request removal of content that mimics an artist’s ‘unique singing or rapping’.

    YouTube has rolled out new rules. Now, if music companies don’t like songs that use computer-made versions of singers’ voices, they can ask YouTube to take those songs down.

    YouTube is bringing in a tool that lets music labels and distributors point out content that copies an artist’s voice. This is happening because of the rise in fake songs made by computers. This tech, called generative AI, can make text, images, and even voices that seem very real.

    For instance, there’s a song called “Heart on My Sleeve” that claims to have vocals made by AI imitating Drake and the Weeknd. It got taken off other music platforms when the record company, Universal Music Group, said it broke the rules by using generative AI. But, you can still find the song on YouTube.

    In a blog post, the Google-owned platform said it will first trial the new controls with a selected group of record companies and independent labels before deploying them to every label and distributor. YouTube said this small group has also been involved in undisclosed early AI music experiments, which use generative AI technologies to produce material.

    In addition, an update to YouTube’s privacy complaint process will let people file complaints about deepfakes.

    The platform said: “We will allow requests for takedown of AI-created or other synthetic/manipulated content appearing to depict an identifiable person, such as their face or voice”, with possible exceptions for parodies and depictions of public figures.

    Nevertheless, not all such content will be pulled from YouTube, and the evaluation of these requests will take many factors into account, wrote Jennifer Flannery O’Connor and Emily Moxley, two YouTube product management vice-presidents, in the blog post.

    Creators, similarly, will be required to declare when they have used realistic-looking “manipulated or synthetic” content, including computer-generated material, and will have a chance to flag synthetic content when it is uploaded. Repeated noncompliance with the rules could lead to removal of content and suspension of advertising payments on YouTube.

    YouTube has come up with new rules for artificial intelligence, and they’re especially crucial when it comes to talking about sensitive stuff like elections, ongoing conflicts, public health crises, or public officials.

    Now, if a label marks a video as AI-generated, you’ll see that info in the video’s description. But, if the content talks about sensitive topics, there will be a more noticeable label. If any AI-made content breaks the existing rules, like making a violent video just to shock people, YouTube will take it down.

    Last week, the big company that owns Facebook and Instagram, called Meta, also said that political advertisers on their platforms have to admit when they’ve used AI in ads. They want transparency, so if a picture, video, or sound is used to make it look like someone is saying or doing something they didn’t, advertisers have to tell you.

    In the UK, the government is worried about deep fakes, which are fake videos or audio clips that can mess with the information we get and make us trust things less, especially during elections. Just last week, there were fake audio clips going around on social media, pretending to be the mayor of London, Sadiq Khan, saying things he didn’t actually say about Armistice Day.

  • “Concerning”: Australian researchers create believable misinformation about vaccines and vaping using AI.

    Experiment produced fake images, patient and doctor testimonials and video in just over an hour.

    It’s quite unsettling: a recent experiment had researchers delving into the world of artificial intelligence to craft over 100 blog posts spreading health-related misinformation across various languages. The results were so concerning that the researchers are now urging for stricter accountability within the industry.

    What happened was, two individuals from Flinders University in Adelaide, without any specialized knowledge in artificial intelligence, took to the internet to figure out how to bypass the safety measures in AI platforms like ChatGPT. Their goal? To churn out as many blog posts containing false information about vaccines and vaping in the shortest time possible.

    Remarkably, within just 65 minutes, they managed to create 102 posts targeting different groups, including young adults, young parents, pregnant women, and those with chronic health conditions. These posts weren’t just text; they included fabricated patient and clinician testimonials and references that looked scientifically legitimate.

    But that’s not all. The AI platform also whipped up 20 fake but realistic images to go along with the articles in less than two minutes. These included pictures depicting vaccines causing harm to young children. To top it off, the researchers even produced a fake video connecting vaccines to child deaths, and it was available in over 40 languages.

    The experiment serves as a stark reminder of the potential misuse of AI technology and highlights the need for increased responsibility and safeguards in the industry.

    The study was published on Tuesday in the US journal JAMA Internal Medicine. Lead author Bradley Menz, a pharmacist, said one major concern around AI is disinformation: the deliberate dissemination of unreliable, false, or purposely misleading information.

    As he put it, the study aimed to test how difficult it is to circumvent the platforms’ safeguards in order to create deceitful health-related content at scale. It found that many freely available online tools can generate misleading health information at an alarming pace.

    Menz said the developers of such platforms should give health professionals, and members of the public sharing their experiences through them, a mechanism for flagging alarming material generated on them.

    Prof Matt Kuperholz, a former chief data scientist with PwC, said that although AI platforms and their safeguards have improved since the study, he was able to recreate the results in “one minute”.

    “The inherent problems that are there are not going to be solved by any kind of internal safeguard,” said Kuperholz, a researcher based at Deakin University.

    Kuperholz advocates for a comprehensive approach to combat disinformation, emphasizing accountability for content publishers, whether they are social media platforms or individual authors. He also suggests the development of technology capable of evaluating the output of various artificial intelligence platforms, ensuring the reliability of information.

    He articulates the need to transition from a state of implicit trust in what we encounter to establishing a trust-based infrastructure. Rather than simply blacklisting untrustworthy tools, he proposes a whitelist approach, endorsing tools and information deemed trustworthy.

    Acknowledging the immense positive potential of these technologies, Kuperholz emphasizes that their constructive applications should not be overlooked.

    Adam Dunn, a professor of biomedical informatics at the University of Sydney, challenges the alarmist tone of Menz’s study findings. Dunn argues against unnecessary fear, noting that while artificial intelligence can indeed be employed to create misinformation, it is equally capable of generating simplified versions of evidence. He cautions against assuming that the existence of misinformation online directly translates to a shift in people’s behaviour, emphasizing the lack of substantial evidence supporting such a correlation.

  • A study found that AI-generated images of white faces are seen as more realistic than photographs of real people.

    Real photographs were judged less realistic than the computer-generated images, but no such difference was found for pictures of people of colour.

    Imagine a situation right out of a movie – technology so advanced that it not only sounds more real than actual humans but looks even more convincing. Well, it seems that moment is already here.

    A recent study discovered something quite surprising: people are more likely to judge AI-generated white faces to be real humans than actual photographs of people.

    The researchers reported, “Remarkably, white AI faces can convincingly pass as more real than human faces – and people do not realize they are being fooled.”

    This research, conducted by a team from Australia, the UK, and the Netherlands, has significant implications in the real world, especially in areas like identity theft. There’s a concern that individuals might be deceived by digital impostors.

    However, the study found that these results didn’t apply to images of people of colour. This discrepancy might be because the AI faces were predominantly trained on images of white individuals.

    Dr Zak Witkower, one of the researchers from the University of Amsterdam, pointed out that this could have consequences for various fields, from online therapy to robotics. He mentioned, “It’s going to produce more realistic situations for white faces than other race faces.”

    The team warns that this situation could lead to confusion between perceptions of race and perceptions of being “human.” Moreover, it might perpetuate social biases, impacting areas like finding missing children, as AI-generated faces play a role in such efforts.

    The study, published in the journal Psychological Science, details two experiments. In one, white adults were shown AI-generated white faces and real human white faces. Participants had to determine whether each face was AI-generated or real and rate their confidence.

    Results from 124 participants showed that 66% of AI images were considered human compared to 51% of real images. Interestingly, this trend did not apply to people of colour, where both AI and real faces were equally judged as human. The researchers also noted that participants’ own race did not affect the results.
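    To illustrate how a gap in judgment rates like this might be checked for statistical significance, here is a minimal two-proportion z-test sketch. The trial counts below are hypothetical, chosen only to mirror the reported 66% vs 51% rates; they are not the paper's actual data or method.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for whether two proportions differ."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Standard normal CDF via erf; two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 66 of 100 AI faces vs 51 of 100 real faces judged "human"
z, p = two_proportion_z(66, 100, 51, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

    With these made-up counts the difference clears the conventional 0.05 significance threshold; the published study would of course rely on its own counts and analysis.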

    In a second experiment, participants were asked to rate AI and human faces on 14 attributes (e.g., age or symmetry) in a blinded setup, with no knowledge of which faces were computer generated.

    The team’s analysis of responses from 610 participants suggested that people are more likely to mistake an AI face for a human one when the face is more proportionate, looks more familiar, and is less memorable.

    However, in a somewhat ironic twist of fate, the research team managed to create an automated deep learning model capable of distinguishing between genuine and artificial facial images with 94 percent accuracy.

    The lead author of the research, Dr Clare Sutherland of the University of Aberdeen, said the work emphasized the need to address biases in AI.

    As the world undergoes massive change brought about by AI, she said, we must ensure that no one is left behind, whatever their race, gender, age, or other protected characteristics.


  • Making AI work: Transforming the possibilities of AI into tangible outcomes.

    No doubt, 2023 is the take-off year for generative AI. ChatGPT made a record-breaking debut and strode into media headlines, C-level discussions, and board agendas.

    However, as a July 16, 2023 article in The Economist stated, most employers are not ready for AI. The article, aptly titled “Your employer is probably not ready for artificial intelligence,” also described how this unreadiness would affect wages as well as the wider economy.

    People love to speculate about the repercussions of AI for jobs, productivity, and quality of life. Yet if AI is adopted by only a few firms in non-technology industries beyond Silicon Valley, its economic impact will be muted. Realizing that impact takes more than the occasional use of a chatbot.

    How long will it be until most companies start reaping the benefits of utilizing all the capabilities offered by AI in the face of scepticism, concern, and unpreparedness?

    Some things are worth waiting for, but your competitors may not give you that much time.

    As The Economist article also put it, “tech-embracing firms lead rivals.”

    In a recent survey of 800 non-executive knowledge workers and 800 C-suite executives (500 of them CEOs), 77% of executives reported that AI is disrupting their company strategy. Thirty eight %

    What is striking is that, despite organizations across industries seeking greater investment in AI and showing some readiness to use it, many remain unprepared to realize a return on AI.

    What must organizations do to realize the real value of AI today? To truly leverage the transformative power of AI developments?

    Automation.

    Automation is the best path to deliver on whatever AI sparks.

    We think of it as AI at work (and it’s not by accident that AI at Work is the theme of UiPath FORWARD VI + TechEd Day in Las Vegas this week).

    Imagine tapping into the incredible power of AI right now, not years down the line, by embracing automation. This means using automation as a gateway to bring your AI visions to life. It’s about taking any idea or innovation you have and making it a reality with the help of AI, giving you a boosted workforce capacity and the adaptability to turn ambitious ideas into tangible outcomes.

    Let’s look at Wesco as an example. They’ve fostered a culture of citizen development and utilized UiPath products like Automation Hub, positioning themselves strongly to swiftly implement AI-related ideas proposed by their workforce.

    Contrary to The Economist’s statement in July 2023 that few companies outside Silicon Valley are prepared to make an impact with AI, Wesco stands out as an exception. Instead of enforcing AI implementation from the top down, Wesco empowers its employees to identify areas where AI can enhance their roles. Through their center of excellence (CoE), they engage both technical and non-technical staff, meeting them where they are to capitalize on their ideas.

    The outcome? An impressive 60% of executable automation ideas originate from Wesco’s own citizen developers.

    When you can realize true value from AI through automation across the diverse tasks that make up your organization’s processes, that capability transforms all of those tasks. The result? Incredible cost savings.

    Intuitively, a complex firm like Intel has a multitude of processes that could be automated. Take, for instance, the tiresome process of classifying international shipments. Using a combination of machine learning and automation, delivered through UiPath AI Center, Intel categorized more than 56,000 items within four months at over 99% precision.
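    The article does not say how Intel's classifier works; item classification like this is commonly framed as supervised text classification over item descriptions. As a rough illustration only, here is a tiny multinomial naive Bayes classifier in plain Python; the product descriptions and categories are invented.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesClassifier:
    """Tiny multinomial naive Bayes for short item descriptions."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        best, best_score = None, -math.inf
        for label, count in self.label_counts.items():
            # log prior + log likelihood with add-one smoothing
            score = math.log(count / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

# Toy training data (hypothetical shipment descriptions)
texts = ["copper wire spool", "fiber optic cable", "steel bolt pack",
         "titanium screw set", "ethernet cable reel", "hex bolt box"]
labels = ["wiring", "wiring", "fasteners", "fasteners", "wiring", "fasteners"]

clf = NaiveBayesClassifier()
clf.fit(texts, labels)
print(clf.predict("optic cable spool"))  # -> "wiring"
```

    A production system would use far richer features and models, but the shape of the task is the same: learn category patterns from labeled examples, then classify new items automatically.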

    Unleash your workforce. Using automation as the path to realize true value from AI has tremendous benefits for each and every one of your employees: higher productivity, more innovation, and a leading role in the organization’s success.

    Omega Healthcare is another example of AI at work. Incorporating AI and automation benefited its staff, stakeholders, and clients. Employee productivity increased by 100%, a win for investors. Employees won by saving 6,700 hours per month previously wasted writing customer correspondence. Process accuracy improved to 99.5 percent, and turnaround time dropped by 50 percent, giving customers an edge as well.

    These results came from pairing AI and automation with UiPath’s document understanding capabilities and the UiPath Action Center, all on the UiPath platform.

    Indeed, the value of using automation coupled with artificial intelligence also extends beyond one’s own enterprise. Every customer encounter is enhanced. Every interaction gains speed, agility, and individual customization.

    One widely used measure of customer satisfaction is Net Promoter Score (NPS); the higher the score, the more positively customers view a brand. Using tools across the UiPath Business Automation Platform, such as chatbots and natural language processing capabilities, Flutter UK&I increased its NPS from ten to forty while avoiding price increases. They also improved their ability to serve clients, as every agent could handle escalations, cutting their transfer rate by a whopping six percent.
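    For reference, NPS is derived from a 0–10 “how likely are you to recommend us?” survey: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6), with passives (7–8) ignored. A minimal sketch; the sample ratings below are hypothetical:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Hypothetical survey: 5 promoters, 3 passives, 2 detractors out of 10
ratings = [10, 9, 9, 10, 9, 8, 7, 8, 3, 6]
print(nps(ratings))  # -> 30  (50% promoters - 20% detractors)
```

    Because the score can range from -100 to +100, a move from ten to forty, as reported here, is a substantial swing.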

    Customers will be showcased throughout the event, discussing their use of automation and its transformative power when leveraged with artificial intelligence.

    UiPath brings together the flexibility and speed of enterprise automation, blending it seamlessly with machine learning, natural language processing, and cutting-edge Generative AI and Specialized AI features. This powerful combination paves the way for swift business transformation in the realm of this new AI era.

    Let’s delve into the impressive communications mining capabilities offered by the UiPath Business Automation Platform. Originating from the prestigious University College London (UCL) Centre for AI Research, our communications mining capabilities continue to lead the pack in machine learning, natural language processing, and AI.

    For those of you attending FORWARD VI in person, make sure to join us in the Keynote Theater tomorrow at 9:40 am PDT. UiPath Chief Product Officer Graham Sheldon will guide us through the efforts of the UiPath product organization to ensure that the UiPath Platform is equipped with the capabilities and technological advancements needed to genuinely bring AI through automation across the entire enterprise, spanning every industry on the planet.

    Graham will also unveil some exciting new AI-powered features. Trust me, you won’t want to miss out on his announcements!

    If you’re eager for one-on-one discussions, head to Expertsville, open from 11:00 am PDT to 3:30 pm PDT on both days of FORWARD VI (October 10 and 11) in the Marquee Ballroom. Expertsville will also welcome you during the Unwind Happy Hour from 5:30 pm to 6:30 pm PDT on Wednesday.

  • Can a profile picture generated by AI enhance your chances of landing a job?

    Filters and Photoshop move over, artificial intelligence (AI) is the new trend for creating online profile pictures.

    During the summer, a TikTok video gained widespread attention. Its caption read, “using this trend to get a new LinkedIn headshot.”

    In this brief yet captivating clip, a young woman showcased her real-life appearance alongside professionally crafted headshot photos, all thanks to an AI-powered app called Remini. The video amassed an impressive 52.3 million views, and similar content from other TikTok creators garnered significant attention.

    Remini, along with competitors like Try It On AI and AI Suit Up, utilizes advanced AI software to generate polished profile photos, designed to emulate the work of a seasoned photographer.

    To harness Remini’s magic, users are prompted to upload eight to 10 selfies, preferably captured from various angles and under favourable lighting conditions. The AI then meticulously studies these images to understand the user’s unique features.

    In just a matter of minutes, Remini begins crafting artificial photos that depict users in a sophisticated and even glamorous light. Different hairstyles, various outfits, and impeccable lighting create a range of professional-looking images. The app goes the extra mile by enhancing skin quality, refining makeup, providing diverse backdrops, and, interestingly, some users even discover a slimming effect.

    With millions of views and a surge in popularity, these AI-powered tools are not just altering photos but reshaping the way individuals present themselves in the professional realm.

    The outcome is partly subjective: some observers note that the images appear real, while others see them as unnatural.

    Unlike past internet image-editing trends, such as dramatically altering your hair or eye colour for fun on social media, this one is clearly centred on LinkedIn and similar profession-oriented websites.

    Some people are drawn to the affordability of the AI services.

    Divya Shishodia, 24, a digital marketer in Australia, says that although the images “are obviously generated, some people might not have the budget to go and get a professional headshot taken”.


    While a visit to a professional photographer may set you back more than £100, Remini and its counterparts typically offer free trials lasting a few days.

    “I’m not claiming they’re the most lifelike, but considering the time and effort required, the output is worthwhile,” remarks Ms. Shishodia. She goes on to explain that attempting to capture a decent profile photo oneself can be quite challenging.

    “You need to consider angles, lighting, trying to avoid shadows… things only actual photographers can handle.”

    Michelle Genobisa, a 26-year-old from Aalborg, Denmark, appreciates the affordability or even lack of cost associated with AI-generated profile photos.

    “I often experiment with my looks, like changing my hair color… so it was an easy way to amass some pictures with the polished effect of a professional photoshoot,” she shares. “To have such a photo taken professionally can be prohibitively expensive.”

    On the flip side, not everyone is enamored with the technology, as Molly McCrann, a 25-year-old actor from Australia, voices her skepticism. “I just think it looks so artificial; you can tell it’s heavily edited or has that unmistakable AI touch,” she remarks.

    When she posted her own AI headshots, she says, they made her look far slimmer than her natural build.

    Ms McCrann adds that she believes it is probably more appropriate to show potential employers how one really looks.

    Nevertheless, she can see both sides of the matter. She recalls agreeing with a comment she read online: if a company is going to make decisions based on appearance, and AI headshots are what get you in the room, then she would use them to land the interview.

    But to what extent might these higher-quality AI portraits affect our self-image? According to consumer psychologist Dr Paul Marsden, there are two sides to the story.


    Expressing his perspective, he shares with the BBC, “On one hand, using AI allows us to present our best selves, projecting the image we desire to the world. This, in turn, may inspire us to embody those qualities in our real lives.”

    He delves into the psychology of first impressions, emphasizing how quick judgments are made based on initial impressions. Through AI, individuals can position themselves favorably for potential opportunities. However, he acknowledges the flip side, noting that reliance on AI-generated images might impact self-worth, fostering a belief of inadequacy compared to the seemingly flawless AI representation, leading to diminished confidence.

    The question arises: Do recruiters share these concerns? Tristan Barthel, representing London-based Tate Recruitment, has observed a significant uptick in people using AI to enhance their photos. Yet, he asserts that, in his role, it doesn’t alter how he evaluates applications. “I can discern if a picture has been AI-generated, but it wouldn’t sway my decision; for me, qualifications are the determining factor.”