Author: Faraz

  • Oppo has just introduced the global version of ColorOS 14, which is built on Android 14. Here’s a peek at the rollout plan and some exciting features to look forward to!

    The latest version of Oppo’s custom skin, ColorOS 14, is now rolling out globally. The release is based on Android 14 and carries the refreshed Aquamorphic design. A brand-new Trinity Engine makes RAM and storage utilization more efficient, and the new UI layer adds a number of AI-based features and new privacy capabilities. Within days, Oppo is expected to announce which devices will get the worldwide ColorOS 14 beta. Several public betas of the new version were already sent out last October to models including the Oppo Find X5 series, Oppo Reno 8 Pro 5G, and Oppo A77.

    In an official announcement on Thursday, Oppo launched the global version of ColorOS 14, built on the latest Android 14. The new, water-inspired Aquamorphic UI layer brings AI smart features along with advanced tools for maintaining security and privacy.

    ColorOS 14 introduces a fresh take on ringtones with aquamorphic themes for calls, alarms, and notifications. There are also seven new global UI sound designs to enhance the overall audio experience. The Aqua Dynamics design is a standout feature, incorporating everyday interactions into visually appealing bubbles, capsules, and panels that seamlessly present information, extending from the status bar.

    In a nod to environmental consciousness, ColorOS 14 offers a Go Green Always-On Display, designed to raise awareness about climate change and environmental protection. This includes three sets of Environment Vision pages, featuring five environment-related animations on each page that dynamically change based on the user’s daily step count.

    The update includes the AI-powered Smart Touch feature, allowing users to easily select and collect text, images, and videos from both system and third-party apps. These can be organized on the File Dock or consolidated into a single note using intuitive selecting and dragging gestures. The new File Dock on the Smart Sidebar facilitates quick content sharing across apps through split-screen, floating windows, or the Dock itself.

    ColorOS 14 introduces the Smart Image Matting feature, enabling users to crop multiple subjects such as people and animals from a single image or paused video. To enhance memory and storage capabilities, Oppo has integrated the Trinity Engine, which includes the ROM Vitalization feature. This feature can save up to 20GB of storage space by compressing app and file data through App Compression and File Compression in the Phone Manager setting. The RAM Vitalization functionality boosts multi-app efficiency and keeps more applications running in the background. The CPU Vitalization feature manages power consumption without compromising performance.

    In terms of battery management, ColorOS 14 incorporates an AI algorithm called Smart Charging. This intelligent feature automatically adjusts the charging current based on the smartphone’s usage status, helping to prevent unnecessary battery wear. Picture Keeper, a new privacy feature, ensures that apps cannot misuse permissions for photos or videos, adding an extra layer of security for user data.
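
    Oppo hasn’t published how Smart Charging chooses a current, but the idea of adapting charge rate to usage can be sketched conceptually. The Python snippet below is a hypothetical illustration only; the `ChargeState` fields, thresholds, and policy are assumptions, not Oppo’s algorithm.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ChargeState:
        battery_pct: float   # current state of charge, 0-100
        screen_on: bool      # is the user actively using the phone?
        night_idle: bool     # e.g. plugged in overnight with no activity

    def charging_current_ma(state: ChargeState, max_ma: int = 3000) -> int:
        """Toy policy: slow charging down when battery wear risk is highest.

        Thresholds are illustrative; a real implementation would be tuned
        against battery chemistry and usage telemetry.
        """
        if state.battery_pct >= 80 and state.night_idle:
            return max_ma // 10   # trickle-charge overnight near full
        if state.battery_pct >= 80:
            return max_ma // 3    # taper above 80% to reduce wear
        if state.screen_on:
            return max_ma // 2    # limit heat while the phone is in use
        return max_ma             # full speed otherwise

    # Example: phone left on the charger overnight at 85% charge.
    print(charging_current_ma(ChargeState(battery_pct=85, screen_on=False, night_idle=True)))
    ```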

    ColorOS 14 release timeline and supported phones

    ColorOS 14 was released for Oppo’s Find X5, and Find X5 Pro flagship phones, along with the Oppo Reno 10 Pro+ 5G, Oppo A77 and Reno 8 Pro 5G phones in October. The Oppo Find X2 series, more Reno 10 series, and the Reno 8 series will get the update in November, followed by older Oppo phones in December and early next year.

    Here is the list of smartphones that will be updated to beta ColorOS 14 along with their scheduled rollout timeline.

  • The upcoming Google Pixel 9 lineup is rumored to debut with Qi2 wireless charging support, a feature that’s already available on the iPhone 15 series. This means users can expect a convenient wireless charging experience similar to what iPhone 15 users are already enjoying.

    The Qi2 charging standard was announced earlier this year at CES 2023.

    The Wireless Power Consortium (WPC) introduced the Qi2 charging standard, pronounced “chi two”, early this year. It succeeds the Qi wireless charging standard and is closely modeled on Apple’s MagSafe technology, which launched with the iPhone 12 and pairs a wireless charging coil with a magnetic alignment system. The WPC said the objective is to build a unified standard expected to work on iPhones and Android phones alike. Qi2 adds a new Magnetic Power Profile, which ensures that phones and other devices are positioned precisely for faster, more efficient charging, and certified devices are meant to be compatible across brands.

    According to the WPC, the first Qi2 products are completing certification testing, which means the first Qi2 charging devices, marking the official start of the technology, should arrive soon. Among the first to hit the market will be Qi2-enabled devices from Anker, Belkin, and other accessory brands. The WPC also said more than 100 devices are in the process of Qi2 testing or awaiting certification.

    According to the press release, the iPhone 15 series will be among the first smartphones to use the new wireless charging technology. For now, the only known Qi2 products apart from the iPhone 15 line are chargers and battery packs, and it remains unclear whether older MagSafe-equipped iPhones will adopt the standard.

    There’s buzz about the upcoming Google Pixel 9 series possibly being among the first Android phones to embrace Qi2 wireless charging technology. A recent blog post by the Wireless Power Consortium (WPC), noticed by The Verge, points out that Liyu Yang, a senior hardware engineer at Google, has joined the consortium’s board. Yang, who’s been involved in developing wireless charging systems for Pixel phones since 2017, is now heading the research and development for the next-gen wireless charging technologies in the upcoming Pixel products.

    The Qi2 platform, aside from enhancing safety features to prevent device damage and battery wear, supports faster 15W charging and comes with foreign object identification capabilities. According to WPC, devices adopting the new Qi v2.0 Extended Power Profile (EPP) without magnets won’t showcase the new Qi2 emblem, making the inclusion of magnets the most noticeable change for consumers. This suggests that the Pixel 9 lineup might bring not only improved wireless charging but also a feature set that aligns with the latest advancements in wireless charging technology.
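
    Foreign object detection in Qi-family chargers is commonly described as power-loss accounting: the transmitter compares the power it sends with the power the receiver reports absorbing, and pauses charging if too much energy goes unaccounted for (it may be heating a stray coin or key). The sketch below is a simplified illustration of that idea, not the Qi2 specification; the 15 W figure matches the Extended Power Profile mentioned above, while the loss threshold is an assumed example value.

    ```python
    def foreign_object_suspected(tx_power_w: float, rx_reported_w: float,
                                 loss_threshold_w: float = 0.5) -> bool:
        """Simplified power-loss accounting check.

        tx_power_w: power the charging pad measures itself transmitting.
        rx_reported_w: power the phone reports actually receiving.
        If the gap exceeds the threshold, energy may be heating a stray
        metal object, so charging should stop. Threshold is illustrative.
        """
        return (tx_power_w - rx_reported_w) > loss_threshold_w

    # Example: a pad pushing 15 W (the Qi2 Extended Power Profile level
    # mentioned above) while the phone only reports 13.9 W absorbed.
    if foreign_object_suspected(tx_power_w=15.0, rx_reported_w=13.9):
        print("Pause charging: possible foreign object on the pad")
    ```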

  • ServiceNow is growing its Now Assist family of generative AI agents.

    ServiceNow, a major player in information technology workflow management, announced today an expansion of its Now Assist family of generative artificial intelligence agents.

    The new additions, including Now Assist in Virtual Agent, flow generation, and Now Assist for Field Service Management, are now accessible to all customers through the ServiceNow Store. These enhancements aim to reduce the time employees spend searching, summarizing, and creating basic information, providing an immediate boost to productivity.

    ServiceNow’s cloud platform is widely used by IT and human resources teams to handle day-to-day tasks, such as addressing employee support requests and managing cybersecurity risks. The platform’s applications extend beyond IT, reaching areas like customer service ticket processing.

    In recent months, ServiceNow has been actively incorporating new generative AI capabilities across its product portfolio, particularly focusing on the Now Assist family of AI agents.

    These agents are integrated with various modules, including IT Service Management, Customer Service Management, Human Resources Service Delivery, and Workflow Creator, prioritizing accuracy and data privacy.

    The introduction of Now Assist in Virtual Agent allows ServiceNow customers to create and deploy generative AI chat experiences quickly, leveraging enhanced guided setup capabilities.

    This enables the creation of customer and employee service agents offering superior conversational experiences, pulling the latest information from internal knowledge bases or service catalogs.

    The Flow generation tool, a new addition, assists administrators and developers in creating workflow blueprints for faster development at scale. It converts plain text into low-code workflows, eliminating the need for manual flow automation creation.

    Developers can then make continuous adjustments and refinements using a no-code design interface in the ServiceNow App Engine, expediting automation rollouts.
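
    ServiceNow hasn’t detailed how flow generation works internally, but the plain-text-to-workflow idea can be illustrated with a toy parser. Everything below, including the `WorkflowStep` structure and the keyword rules, is a hypothetical sketch rather than ServiceNow’s API; the real tool reportedly uses generative AI rather than keyword matching.

    ```python
    from dataclasses import dataclass

    @dataclass
    class WorkflowStep:
        action: str
        detail: str

    def draft_workflow(description: str) -> list[WorkflowStep]:
        """Toy text-to-workflow parser: split a plain-text description into
        ordered steps on 'then' connectives, then tag each step with a guessed
        action type. A real generator would use a language model, not keywords."""
        text = description.lower().replace(", then ", " | ").replace(" then ", " | ")
        steps = []
        for chunk in (c.strip(" ,.") for c in text.split("|") if c.strip()):
            if "approve" in chunk:
                action = "approval"
            elif "notify" in chunk or "email" in chunk:
                action = "notification"
            elif "update" in chunk:
                action = "record_update"
            else:
                action = "task"
            steps.append(WorkflowStep(action=action, detail=chunk))
        return steps

    # Example request described in plain language.
    for step in draft_workflow(
            "When a laptop request is submitted, ask the manager to approve it, "
            "then update the request record, then notify the requester by email"):
        print(step)
    ```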

    Now Assist for Field Service Management focuses on simplifying the work order process by using generative AI to summarize work order tasks based on all activities and incidental data. This is particularly beneficial for field technicians relying on mobile devices as they move between sites to complete their work.

    While it may seem that ServiceNow is joining the generative AI trend, the company emphasizes that it has invested several years in developing and integrating AI into its platform. Instead of relying on existing generative AI models, ServiceNow has built its own domain-specific Now large language model.

    Trained on its workflows and data, this model is customized to perform specific tasks within the platform, ensuring a better user experience, transparency, and superior governance and data security.

    ServiceNow President and Chief Operating Officer CJ Desai highlighted the role of AI in achieving faster execution and smarter decision-making, positioning ServiceNow at the forefront by intelligently integrating generative AI into the core of the Now Platform.

    This integration allows organizations to harness AI securely and confidently for accelerated business value.

  • Salesforce is working to transform customer service experiences using advanced AI technology that generates solutions.

    Today, Salesforce announced a new addition to its toolkit for customer service—the Service Intelligence application. This application, integrated into its flagship Service Cloud platform, is fueled by Salesforce’s powerful Data Cloud. It empowers service agents and managers by consolidating all necessary information for customer support, eliminating the need to navigate through various screens.

    Service Intelligence aims to facilitate quicker, more informed decisions for customer service professionals, ultimately enhancing customer satisfaction and providing a competitive advantage. The application features customizable dashboards offering a snapshot of critical metrics like customer satisfaction and workloads.

    Managers can efficiently oversee teams, respond promptly to challenges, and identify trends using the Einstein Conversation Mining capability, which analyses customer interactions.

    One notable aspect of Service Cloud is the Einstein Conversation Mining feature, utilizing AI to swiftly identify trends in customer conversations. For instance, if a surge in inquiries about a product’s return policy is detected, AI customer service bots can be trained to address these concerns efficiently.
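
    Salesforce doesn’t document how Einstein Conversation Mining groups conversations, but the trend-spotting concept can be sketched generically: count topic mentions per period and flag topics whose volume spikes. The transcripts, topic keywords, and spike rule below are illustrative assumptions, not Salesforce’s implementation.

    ```python
    from collections import Counter

    # Hypothetical transcripts keyed by week; in practice these would come
    # from a case or chat data store.
    conversations = {
        "week_1": ["how do I reset my password", "what is your return policy"],
        "week_2": ["return policy for opened items?", "can I return this jacket",
                   "return window question", "billing address change"],
    }

    TOPICS = {"returns": ("return",), "billing": ("billing", "invoice"),
              "login": ("password", "login")}

    def topic_counts(messages):
        """Count how many messages mention each topic's keywords."""
        counts = Counter()
        for msg in messages:
            for topic, keywords in TOPICS.items():
                if any(k in msg.lower() for k in keywords):
                    counts[topic] += 1
        return counts

    prev, curr = topic_counts(conversations["week_1"]), topic_counts(conversations["week_2"])
    for topic, n in curr.items():
        if n >= 2 * max(prev.get(topic, 0), 1):   # crude spike rule
            print(f"Trending topic: {topic} ({prev.get(topic, 0)} -> {n})")
    ```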

    Additionally, Service Intelligence allows users to seamlessly transition from the dashboard to data exploration visualizations in Tableau Software, maintaining the same data context. This facilitates sharing crucial insights more effectively.

    While the current features are available, Salesforce plans additional updates, including the integration of Einstein Copilot in Service Intelligence. This will enable users to interact with Einstein, a generative AI assistant, for insights and metrics in a conversational manner.

    Another upcoming addition is Einstein Studio, providing quick access to additional insights such as predicting customer escalations and issue resolution times.

    A new feature called Customer Effort Score is also on the horizon. This tool will provide a comprehensive view of the difficulty of each customer’s service experience, along with recommendations for improving satisfaction, such as offering discounts in challenging situations.

    Salesforce’s ongoing efforts in AI, exemplified by Service Intelligence, underscore its commitment to enhancing customer interactions. The company has consistently introduced new AI features across its various platforms, with a significant focus on research and development in the AI domain.

    Analysts, such as Rebecca Wetterman, emphasize that success in the customer service industry relies on making interactions easy and efficient, a goal that AI can significantly contribute to.

    Salesforce’s AI initiatives have been a continuous effort, involving collaborations with industry leaders like IBM. The overarching aim is to help customers seamlessly adopt and benefit from generative AI tools across different business functions, including sales, marketing, customer service, finance, and more.

  • Geico insurance claims phone number: Your Comprehensive Guide to Contacting Geico

    Introduction:

    Navigating an insurance claim can be challenging, and having the correct contact information is important. In this guide, we’ll go over everything you need to know about Geico insurance claim phone numbers and how to contact Geico for various types of insurance claims.

    Why Knowing the Geico Insurance Claim Phone Number Matters:

    When faced with an unexpected event, having the right insurance claim phone number is vital. Geico offers a wide range of insurance products, each with its own claims process and contact information. Let’s go over the important phone numbers you should keep handy.

    Geico Insurance Claim Phone Number Overview:

    Geico’s general insurance claims phone number is your go-to for a variety of claims, including auto, home, and more.

    Geico Home Insurance Claim Phone Number:

    If you’re dealing with a home-related incident, such as property damage, the Geico home insurance claim phone number is the direct line to start your claim process.

    Geico Homeowners Insurance Claim Phone Number:

    For homeowners specifically, Geico provides a dedicated phone number to streamline the claims process related to home ownership.

    How Do I Contact Geico Claims?

    Understanding how to initiate a claim is crucial for a swift resolution. Geico offers multiple contact options for your convenience.

     How Do I Contact Geico by Phone?

    Geico provides a centralized phone number for general inquiries and claims. Knowing how to reach Geico by phone ensures you can connect with the right department promptly.

    Geico Claims Phone Number:

    Whether it’s an auto accident or property damage, the Geico claims phone number is a direct line to start your claim process.

     How Do You File a Claim with Geico?

    Filing a claim with Geico involves specific steps. We’ll walk you through the process, highlighting key information you need to provide.

    Geico Car Insurance Claims Phone Number:

    For car-related incidents, the Geico car insurance claims phone number is essential. Learn how to report an accident or file a claim efficiently.

    Geico Auto Insurance Claims Phone Number:

    Geico’s auto insurance claims phone number is dedicated to handling claims related to vehicular incidents. Make sure you have this number saved for emergencies.

     Geico Existing Claims Phone Number:

    If you’ve already filed a claim and need to follow up, the Geico existing claims phone number ensures you connect with the right department to check the status and get updates.

     Timely Reporting:

    Whether it’s an accident or property damage, reporting the incident to Geico promptly is crucial. Delays can impact the claims process.

     Complete Documentation:

    Be prepared with all necessary documentation when filing a claim. This includes photos, witness statements, and any relevant documents.

    Open Communication:

    Stay in regular contact with Geico claims adjusters. Clear and open communication can expedite the resolution of your claim.

    Geico Insurance Claim Keywords in Action:

    Throughout your claims process, remember to use the specific keywords associated with Geico insurance claims. Whether you’re searching online or speaking to a representative, these keywords ensure clarity and efficiency.

     Conclusion:

    Navigating Geico insurance claims is more straightforward when armed with the right phone numbers and information. Remember, timely reporting, complete documentation, and open communication are the keys to a smooth claims process.

    FAQs

    1. How do I contact Geico for a new insurance claim?

    Dial the Geico Claims Phone Number to start the process.

    2. What’s the specific phone number for Geico home insurance claims?

    For home-related incidents, use the Geico Home Insurance Claim Phone Number.

    3. Is there a separate phone number for Geico homeowners insurance claims?

    Yes, for homeowners, the Geico Homeowners Insurance Claim Phone Number is available.

    4. Can I check the status of an existing Geico insurance claim by phone?

    Certainly, call the Geico Existing Claims Phone Number for updates on existing claims.

    5. How do I file a car insurance claim with Geico?

    Dial the Geico Car Insurance Claims Phone Number to report a car-related incident.

  • Microsoft just introduced some cool new AI tools, tailor-made for both Windows developers and IT professionals. It’s like they’ve got a whole set of shiny, fresh tricks up their sleeve to make everyone’s job a bit easier and more exciting.

    Microsoft debuts new AI tools for Windows developers and IT professionals.

    Microsoft is really into bringing artificial intelligence to your Windows 11 experience! They just spilled the beans at their Ignite 2023 conference about some updates and cool stuff for both developers and their cloud-based Windows 365 service.

    In the recent Windows 11 update on September 26, Microsoft didn’t hold back. They added a bunch of features for developers, all aimed at making AI a big deal in the operating system. The idea? To make the developer experience super productive. And guess what? They’re rolling out new tools to help developers kickstart their AI projects on Windows, especially for creating and using conversational experiences. It’s like they’re giving developers the keys to the AI kingdom!

    AI app development on Windows is getting simpler with the launch of Windows AI Studio, which gives developers access to the AI development tools and models available through Azure AI Studio, announced during Ignite, as well as models from other providers such as Hugging Face Inc.

    Rather than large language models (LLMs) like OpenAI LP’s GPT-4, the focus is on small language models (SLMs), which are lightweight and well suited to specific industry or business purposes. LLMs, by contrast, are general-purpose and consume processor cycles heavily, so they are costly to run and usually live in the cloud, executing across arrays of graphics processing units (GPUs). Using an SLM also means all data is processed locally, so nothing leaves the device, making these models more secure.
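
    To see the local-SLM idea in practice, running a small open model takes only a few lines with the Hugging Face transformers library. This is a generic sketch, not part of Windows AI Studio; it assumes the openly available microsoft/phi-2 checkpoint and a machine with enough memory to hold it.

    ```python
    # pip install transformers torch
    from transformers import pipeline

    # Load a small language model locally; all inference stays on-device,
    # which is the privacy argument made for SLMs above.
    generator = pipeline(
        "text-generation",
        model="microsoft/phi-2",   # assumed example checkpoint (~2.7B params)
        torch_dtype="auto",
    )

    prompt = "Summarize in one sentence why small language models suit on-device use:"
    result = generator(prompt, max_new_tokens=60, do_sample=False)
    print(result[0]["generated_text"])
    ```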

    Developers get a guided workspace setup for model configuration and can fine-tune popular SLMs such as Phi, Llama 2, and Mistral in AI Studio. They can then quickly cycle through different models and refine them using Prompt Flow and Gradio templates right from the workspace.

    The company will also release a Visual Studio Code extension in the coming weeks that makes it easy for developers starting out on AI app development to integrate their models into different Windows applications. It includes a guided, workspace-style interface that lets developers concentrate on building while the Studio prepares the environment automatically and supplies the AI tools.

    AI Studio is about to get even cooler with new capabilities. Soon, it will bring you a bunch of fancy models designed specifically for Windows GPUs. First up on the list are Llama2-7B, Mistral-7B, and Stable Diffusion XL.

    Now, let’s talk about Microsoft’s Copilot AI-assistant. It’s like this super-smart helper integrated into Windows 11, the Edge web browser, and Bing search. It can chat with you, help out with tasks in Windows, write documents in Word, crunch numbers in Excel—basically, it’s your AI sidekick.

    But that’s not all. Developers, listen up! You’ll soon have the power to create and use small language models on your own turf. This means you can build custom AI-powered apps that chat with users, handle questions, automate tasks, and solve problems, all tailored to specific situations.

    Oh, and there’s more good news. Microsoft is giving some love to Windows 365, the cloud-based OS service for Windows 11 or Windows 10, and Azure Virtual Desktop. Admins, get ready for an easier time deploying desktops and systems to both in-office and remote workers.

    Windows 365 lets you launch a full operating system from anywhere on any device—MacBooks, iPads, Linux devices, Android phones—you name it. It’s like having your computer in the cloud. And for developers, there’s a Dev Box for creating personalized dev environments in the cloud, accessible from any device.

    Azure-based Windows Virtual Desktops take a different approach, providing virtualized apps and remote desktops for Windows 10 and Windows 11.

    Now, for some exciting previews. Windows 365 is gearing up for graphics processing unit (GPU) support, perfect for creative tasks like graphic design and 3D modeling. There’s also a sneak peek at AI capabilities for Cloud PC resources, offering recommendations to save costs and boost efficiency.

    Security buffs, Microsoft’s got your back. Single-sign-on and passwordless authentication support is ready to roll for Windows 365 and Azure Virtual Desktop. They’re also adding features like blocking screen captures, protecting against tampering, and even a lockbox capability to keep your data safe from prying eyes.

    And get this, soon organizations can encrypt their Windows 365 Cloud PC instance disks with their own encryption keys. This means top-notch security for your data while it’s resting in the cloud. Microsoft’s making sure your information stays safe and sound.

  • Microsoft is supercharging the development of artificial intelligence with enhancements to Fabric, Azure AI Studio, and updates to its databases.

    Microsoft Corp. is raising its game for artificial intelligence developers with a host of updates to its Azure cloud computing platform.

    During its Ignite 2023 conference, Microsoft highlighted that the key to making artificial intelligence (AI) successful is having access to the right data. They introduced Microsoft Fabric as part of the Intelligent Data Platform, a tool that combines different databases, analytics, and governance services. This helps companies seamlessly connect their crucial data with their AI processes. Additionally, Microsoft unveiled Azure AI Studio, which comes with improved AI search features and content safety measures.

    According to Jessica Hawk, Azure’s corporate vice president of data, AI, digital applications, and marketing, every AI application has its foundation in data. Therefore, a company needs a complete data estate to support artificial intelligence innovation; however, this is not easy to develop if there are fragmented data systems.

    The company says Microsoft Fabric, announced in May at Build 2023, weaves together the essential data and analytics software for developing leading-edge AI apps and systems. The new software-as-a-service offering combines platforms such as Data Factory, Synapse, and Power BI into a single product that replaces those disparate systems with an alternative that is easier to install, use, and administer, and more cost-efficient. Microsoft already provides a full end-to-end AI development platform in Azure OpenAI, which can be used to build many kinds of sophisticated next-generation AI applications, but that work has to be backed by an uninterrupted stream of clean data and well-integrated knowledge.

    The company explained that with Fabric, developers get a fully fledged data platform instead of a confusing maze of unrelated tools. The platform contains everything needed to derive meaning from data and share it with AI systems, and it will gain its own copilot, now in preview, that lets programmers work through natural-language commands.

    Microsoft Fabric sits on top of an open data lake platform called OneLake, which keeps everything in one place so there’s no need to move or copy data. With OneLake, Fabric ensures data is properly governed and uses a pricing model that scales with usage. It is also open, so customers are not locked into a single vendor’s way of doing things, and it integrates with Microsoft 365 apps like Excel, Dynamics 365, and Microsoft Teams. It’s all about making things work together smoothly.

    Microsoft also rolled out a public preview of Azure AI Studio to give AI developers every tool they need to work with generative AI models. It supports the latest large language models and integrates data to enable retrieval-augmented generation (RAG), in which AI systems draw on a company’s private datasets, alongside intelligent search, full-cycle model management, and content safety tools. At the center of Azure AI Studio is Prompt Flow, which handles prompt orchestration and LLMOps; the Azure Machine Learning service has also gained Prompt Flow, making it possible to prototype, experiment, iterate, and deploy AI applications.

    Another tool offered under Azure AI Studio is Azure AI Search, previously called Cognitive Search. It enables faster, more accurate data retrieval to improve both the quality and the response times of AI applications. The retrieval-augmented generation approach lets LLMs incorporate information from external sources and consequently produce better answers.

    RAG is especially important in cases where, for example, a customer service agent must give valid responses drawn from trustworthy information. It is powered by vector search, an information retrieval method that works across diverse data types such as images, audio, text, and video, which makes vector search one of the crucial capabilities for giving AI fuller access to information.
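
    To make the retrieval-augmented generation pattern concrete, here is a minimal, framework-agnostic sketch: embed the documents, embed the question, retrieve the closest document by cosine similarity, and prepend it to the prompt sent to the language model. The hashing-based “embedding” is a stand-in so the example stays self-contained; a real system would use a learned embedding model and a service such as Azure AI Search.

    ```python
    import hashlib
    import math

    def embed(text: str, dim: int = 64) -> list[float]:
        """Stand-in embedding: hash character trigrams into a fixed-size vector.
        Real RAG systems use learned embedding models instead."""
        vec = [0.0] * dim
        for i in range(len(text) - 2):
            h = int(hashlib.md5(text[i:i + 3].lower().encode()).hexdigest(), 16)
            vec[h % dim] += 1.0
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]

    def cosine(a, b):
        # Vectors are already normalized, so the dot product is the cosine.
        return sum(x * y for x, y in zip(a, b))

    documents = [
        "Refunds are processed within 5 business days of receiving the item.",
        "The warranty covers manufacturing defects for 24 months.",
        "Support is available by chat from 9am to 5pm on weekdays.",
    ]
    index = [(doc, embed(doc)) for doc in documents]

    question = "How long do refunds take?"
    q_vec = embed(question)
    best_doc = max(index, key=lambda pair: cosine(q_vec, pair[1]))[0]

    # The retrieved passage is the grounded context that would be prepended
    # to the prompt sent to the LLM.
    prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}"
    print(prompt)
    ```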

    The semantic ranker in Azure AI Search lets developers use the same search re-ranking technology that underlies Microsoft Bing to rank responses by relevance.

    Azure AI Content Safety gives developers tools to check how well their models are doing, making sure they’re accurate and free from biases or other security risks. It comes with cool features like the Responsible AI Dashboard and model monitoring. Microsoft also promises to have the back of Azure customers, protecting them from legal troubles related to copyright infringement. This commitment, called the Customer Copyright Commitment, applies to everyone using the Azure OpenAI Service. It’s basically Microsoft saying, “We’ve got your back if anything goes wrong, and we’re confident in our ability to keep your AI development safe.”

    Microsoft has some really big news for AI developers, and there’s even more coming their way. Picture this: GPT-4 Turbo with Vision is on the horizon for Azure OpenAI Service and Azure AI Studio. It’s like the superhero of language models, brought to you by Microsoft’s partner OpenAI LP. This powerhouse gives developers the keys to unlock multi-modal capabilities in their AI applications. Translation: AI that can not only understand language but also see and make sense of things, whether it’s in the real world, videos, or elsewhere.

    But wait, there’s more! The Azure Cosmos DB database service is levelling up, aiming to be the go-to choice for AI developers. Now, it’s all about dynamic scaling – the ability to flexibly adjust database sizes based on demand. This ensures developers are using just the right amount of cloud resources, optimizing their costs while rocking the AI world.

    Microsoft has also added more features, including the availability of Azure Cosmos DB for MongoDB vCore and vector search in Azure Cosmos DB for MongoDB vCore. This gives developers the ability to build intelligent applications on data stored in Azure Cosmos DB with full MongoDB support, which translates into more native Azure integration and a lower total cost of ownership for MongoDB developers, while keeping the familiar database service experience.

    Vector search allows unstructured data in Azure Cosmos DB to be stored as embeddings and searched directly, avoiding the hassle of moving data out to a separate platform that provides vector search functionality.

    Microsoft is also improving Azure Kubernetes Service (AKS), its managed version of the open-source Kubernetes software that orchestrates the containers underpinning AI apps.

    The company said AKS now supports specialized machine learning workloads such as LLMs with virtually no overhead to provision them for operation. The updated AI toolchain operator for LLMs in AKS scales across CPU and GPU cluster nodes.

    It does this by choosing the best available infrastructure for each model, so developers can spread inferencing costs across a few low-GPU-count virtual machines. This lowers costs, expands the regions in which LLM workloads can run, and eliminates the wait times associated with higher-GPU-count VMs. The company added that customers will also be able to use preset models hosted on AKS, shortening the setup time of the inference service.

    Guess what’s new in AKS? They’ve introduced the Azure Kubernetes Fleet Manager, and it’s like the superhero for managing workload distribution across Kubernetes clusters. This means it helps make sure everything is running smoothly and keeps your platform and applications up to date. Developers can breathe easy knowing they’ve got the latest and most secure software running the show.

    And that’s not all! Microsoft just dropped the latest version of its .NET computing framework, and it’s called .NET 8. This update is a game-changer for Windows, Linux, and macOS operating systems. It promises a big leap in performance and productivity for ASP.NET and .NET developers, setting the stage for a whole new era of super-smart cloud-native applications that play nice with Azure.

    Hold on, there’s more goodness. .NET 8 comes with fancy features like support for Azure Functions, Azure App Service, AKS, and Azure Container Apps. They’ve jazzed up Visual Studio, their integrated development environment, and made friends with GitHub and Microsoft DevBox. It’s like Microsoft is rolling out the red carpet for developers!

  • YouTube will provide a feature allowing users to flag songs created by AI that imitate the voices of real artists.

    Record companies will be able to request removal of content that mimics an artist’s ‘unique singing or rapping’.

    YouTube has rolled out new rules. Now, if music companies don’t like songs that use computer-made versions of singers’ voices, they can ask YouTube to take those songs down.

    YouTube is bringing in a tool that lets music labels and distributors point out content that copies an artist’s voice. This is happening because of the rise in fake songs made by computers. This tech, called generative AI, can make text, images, and even voices that seem very real.

    For instance, there’s a song called “Heart on My Sleeve” that claims to have vocals made by AI imitating Drake and the Weeknd. It got taken off other music platforms when the record company, Universal Music Group, said it broke the rules by using generative AI. But, you can still find the song on YouTube.

    In a blog post, the Google-owned platform said it will first trial the new controls with a select group of record companies and independent labels before rolling them out to every label and distributor. YouTube said this small group is also involved in early AI music experiments, details of which have not been shared, that use generative AI technology to produce material.

    In addition, an update to YouTube’s privacy complaint process will let people file complaints about deepfakes.

    The platform said it will allow requests for the removal of AI-generated or other synthetic or manipulated content that simulates an identifiable person, including their face or voice.

    Not all such content will be pulled from YouTube, however: the platform will weigh a range of factors when evaluating these requests, such as whether the content is parody or satire, wrote Jennifer Flannery O’Connor and Emily Moxley, two YouTube vice presidents of product management, in the blog post.

    Creators, likewise, will be required to disclose when they have used realistic-looking “manipulated or synthetic” content, including AI-generated material, and will have the chance to flag synthetic content when it is uploaded. Repeated failure to follow the rules could lead to removal of content and suspension of advertising payments on YouTube.

    YouTube has come up with new rules for artificial intelligence, and they’re especially crucial when it comes to talking about sensitive stuff like elections, ongoing conflicts, public health crises, or public officials.

    Now, if a label marks a video as AI-generated, you’ll see that info in the video’s description. But, if the content talks about sensitive topics, there will be a more noticeable label. If any AI-made content breaks the existing rules, like making a violent video just to shock people, YouTube will take it down.

    Last week, the big company that owns Facebook and Instagram, called Meta, also said that political advertisers on their platforms have to admit when they’ve used AI in ads. They want transparency, so if a picture, video, or sound is used to make it look like someone is saying or doing something they didn’t, advertisers have to tell you.

    In the UK, the government is worried about deep fakes, which are fake videos or audio clips that can mess with the information we get and make us trust things less, especially during elections. Just last week, there were fake audio clips going around on social media, pretending to be the mayor of London, Sadiq Khan, saying things he didn’t actually say about Armistice Day.

  • “Concerning”: Australian researchers create believable misinformation about vaccines and vaping using AI.

    Experiment produced fake images, patient and doctor testimonials and video in just over an hour.

    It’s quite unsettling: a recent experiment had researchers delving into the world of artificial intelligence to craft over 100 blog posts spreading health-related misinformation across various languages. The results were so concerning that the researchers are now urging for stricter accountability within the industry.

    What happened was, two individuals from Flinders University in Adelaide, without any specialized knowledge in artificial intelligence, took to the internet to figure out how to bypass the safety measures in AI platforms like Chat GPT. Their goal? To churn out as many blog posts containing false information about vaccines and vaping in the shortest time possible.

    Remarkably, within just 65 minutes, they managed to create 102 posts targeting different groups, including young adults, young parents, pregnant women, and those with chronic health conditions. These posts weren’t just text; they included fabricated patient and clinician testimonials and references that looked scientifically legitimate.

    But that’s not all. The AI platform also whipped up 20 fake but realistic images to go along with the articles in less than two minutes. These included pictures depicting vaccines causing harm to young children. To top it off, the researchers even produced a fake video connecting vaccines to child deaths, and it was available in over 40 languages.

    The experiment serves as a stark reminder of the potential misuse of AI technology and highlights the need for increased responsibility and safeguards in the industry.

    The research was published on Tuesday in the US journal JAMA Internal Medicine. Lead author Bradley Menz, a pharmacist, said one of the big worries about AI is disinformation: the deliberate creation of misleading or false information.

    He said the researchers wanted to find out how easy or difficult it would be to break the systems’ safeguards against such misuse. The study concluded that many freely available online tools can generate misleading health information at an alarming pace.

    Menz said the platforms should give health professionals and members of the public a way to report alarming material generated with these tools, and that developers should act on that feedback.

    Prof Matt Kuperholz, a former chief data scientist with PwC, said that although AI platforms have improved and added stronger safeguards since the study, he was still able to reproduce similar results within a minute.

    According to Kuperholz, a researcher at Deakin University, no level of inbuilt safeguards will prevent misuse of the technology.

    Kuperholz advocates for a comprehensive approach to combat disinformation, emphasizing accountability for content publishers, whether they are social media platforms or individual authors. He also suggests the development of technology capable of evaluating the output of various artificial intelligence platforms, ensuring the reliability of information.

    He articulates the need to transition from a state of implicit trust in what we encounter to establishing a trust-based infrastructure. Rather than simply blacklisting untrustworthy tools, he proposes a whitelist approach, endorsing tools and information deemed trustworthy.

    Acknowledging the immense positive potential of these technologies, Kuperholz emphasizes that their constructive applications should not be overlooked.

    Adam Dunn, a professor of biomedical informatics at the University of Sydney, challenges the alarmist tone of Menz’s study findings. Dunn argues against unnecessary fear, noting that while artificial intelligence can indeed be employed to create misinformation, it is equally capable of generating simplified versions of evidence. He cautions against assuming that the existence of misinformation online directly translates to a shift in people’s behaviour, emphasizing the lack of substantial evidence supporting such a correlation.

  • A study found that AI-generated images of white faces are judged more realistic than photographs of real people.

    Photographs were seen as less realistic than computer images but there was no difference with pictures of people of colour.

    Imagine a situation right out of a movie – technology so advanced that it not only sounds more real than actual humans but looks even more convincing. Well, it seems that moment is already here.

    A recent study discovered something quite surprising: people are more likely to mistake pictures of AI-generated white faces as real humans compared to actual photographs of people.

    The researchers reported, “Remarkably, white AI faces can convincingly pass as more real than human faces – and people do not realize they are being fooled.”

    This research, conducted by a team from Australia, the UK, and the Netherlands, has significant implications in the real world, especially in areas like identity theft. There’s a concern that individuals might be deceived by digital impostors.

    However, the study found that these results didn’t apply to images of people of colour. This discrepancy might be because the AI faces were predominantly trained on images of white individuals.

    Dr Zak Witkower, one of the researchers from the University of Amsterdam, pointed out that this could have consequences for various fields, from online therapy to robotics. He mentioned, “It’s going to produce more realistic situations for white faces than other race faces.”

    The team warns that this situation could lead to confusion between perceptions of race and perceptions of being “human.” Moreover, it might perpetuate social biases, impacting areas like finding missing children, as AI-generated faces play a role in such efforts.

    The study, published in the journal Psychological Science, details two experiments. In one, white adults were shown AI-generated white faces and real human white faces. Participants had to determine whether each face was AI-generated or real and rate their confidence.

    Results from 124 participants showed that 66% of AI images were considered human compared to 51% of real images. Interestingly, this trend did not apply to people of colour, where both AI and real faces were equally judged as human. The researchers also noted that participants’ own race did not affect the results.

    In a second experiment, participants were asked to rate AI and human faces on 14 attributes (such as age or symmetry) in a blinded setup, without knowing which faces were computer generated.

    The team’s analysis of responses from 610 participants suggested that people were most likely to mistake an AI face for a human one when it was more proportionate, looked familiar, and was less memorable.

    In a somewhat ironic twist, the research team managed to create an automated deep learning model capable of distinguishing between genuine and artificial facial images with 94 percent accuracy.
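
    The paper’s detector isn’t included here, but the general shape of such a system is a supervised binary classifier trained on labelled real and AI-generated face images. The sketch below is a generic, hypothetical stand-in using scikit-learn on placeholder feature vectors; it is not the authors’ 94-percent-accurate model.

    ```python
    # pip install scikit-learn numpy
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Placeholder data: 512-dim "face embeddings" with a slight class shift.
    # In a real detector these would come from a face-image feature extractor.
    real_faces = rng.normal(loc=0.0, scale=1.0, size=(500, 512))
    ai_faces = rng.normal(loc=0.15, scale=1.0, size=(500, 512))
    X = np.vstack([real_faces, ai_faces])
    y = np.array([0] * 500 + [1] * 500)   # 0 = real, 1 = AI-generated

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"accuracy on held-out faces: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
    ```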

    Dr Clare Sutherland, one of the authors of the research, from the University of Aberdeen, said the study highlighted the need to address biases in AI.

    She said that as the world goes through massive change brought about by AI, we must ensure that no one is left out of the equation, irrespective of their race, sex, age, or any other protected characteristic.
