Category: Artificial Intelligence

  • “Elon Musk Reveals ‘Grok,’ xAI’s Latest Chatbot Creation”

    Jaap Arriens—NurPhoto/Getty Images

    Elon Musk has introduced his own AI chatbot to challenge ChatGPT, claiming the prototype outperforms ChatGPT 3.5 on multiple benchmarks.

    Named “Grok”, the AI bot is the first product of Musk’s xAI company and is currently being tested with a limited group of users in the United States. Grok is being developed using data from Musk’s X, formerly known as Twitter, which the company’s website says keeps it better informed about the latest developments than bots that rely on static datasets. It is also designed to respond with a touch of humour and a hint of rebellion, according to the official announcement.

    Earlier this year, Musk joined a group of signatories in an open letter urging a temporary halt to the development of advanced AI models so that shared safety protocols could be created.

    “I added my signature to that letter fully aware of its limited impact,” the billionaire, who owns X and serves as chief executive of Tesla Inc., said in a post on Sunday. “I merely wanted to formally express my support for a temporary pause.”

    US President Joe Biden has recently issued an executive order on AI oversight aimed at establishing standards for security and privacy safeguards. Meanwhile, technology leaders and academics discussed the risks of the technology at the AI Safety Summit in the United Kingdom last week.

    Grok was built within a two-month development timeframe, according to the xAI announcement. Once testing is complete, it will be available to all X Premium+ subscribers. Musk has said he wants to expand X beyond its original role as a social platform into a versatile app, similar to Tencent Holdings Ltd.’s WeChat in China, and Grok will play a crucial role in that effort. Although xAI is an independent company, it has said it intends to work closely with X, Tesla and other enterprises.

  • Musk has mentioned that his latest AI chatbot comes with a touch of humour.

    Elon Musk has introduced an AI chatbot named Grok on his social media platform X, formerly known as Twitter, but currently, it’s only accessible to specific users.

    Writing on X before its release, Mr Musk claimed that in some important respects it is the best chatbot that currently exists.

    Mr Musk boasted that Grok “loves sarcasm” and would respond playfully to the questions put to it.

    However, there are early hints that it suffers from some of the limitations plaguing other AI-based technologies.

    Other chatbots refuse to answer certain questions, such as requests that would help people commit crimes. According to Mr Musk, however, Grok will respond to “spicy questions” that few other AI systems will address.

    Mr Musk posted a demonstration of the new tool in which Grok was asked to provide a recipe for making cocaine.

    It replied sarcastically that it would look up the recipe and would surely be able to help, before offering only generic, unusable bullet points laced with sarcasm, along with a warning against actually attempting it.

    It also sounded pleased about the trial of convicted crypto entrepreneur Sam Bankman-Fried, but wrongly claimed the jury had taken eight hours to reach a guilty verdict, when in fact it returned one in under five hours.

    Generative AI tools such as Grok have often been criticised for producing confident, polished prose that nonetheless contains basic errors.

    xAI, the team behind Grok, was established in July, bringing together talent from various AI research companies. It operates as a separate entity but maintains close links with Mr Musk’s other ventures, X and the electric carmaker Tesla.

    Earlier this year, Mr Musk said he wanted his AI venture to build a “truth-seeking AI” that aims to comprehend the fundamental nature of the universe.

    One notable advantage of Grok is its access to real-time information from the X platform, setting it apart from initial versions of some competitors. However, increasingly up-to-date responses are also available for paying customers using other AI tools.

    Grok is currently in a testing or “beta” stage but will eventually become accessible to X’s paying subscribers. Mr. Musk mentioned on Sunday that the chatbot will be “integrated into the X app and will also be offered as a standalone app.”

    At last week’s AI Safety Summit in the UK, Mr Musk acknowledged that AI development carries dangers.

    However, he is also a long-time advocate of the technology. He was a co-founder of OpenAI, the firm behind ChatGPT, the chatbot that went mainstream last year. Microsoft has since invested in OpenAI and makes its technology accessible via its own platforms.

    Since then, Google has introduced its rival AI model, Bard, and Meta has released Llama. These tools are designed to generate text that resembles human writing, based on patterns learned from the data they were trained on.

    The term was coined by science-fiction author Robert A. Heinlein in his 1961 novel Stranger in a Strange Land, in which to “grok” meant to understand someone deeply.

    xAI, however, said Grok was modelled on The Hitchhiker’s Guide to the Galaxy by Douglas Adams, which began in England as a radio series before being published as a book and later adapted into other formats.

    xAI explained that Grok was designed to answer almost any question and, far harder, even to suggest what questions should be asked.

    It added: “Grok is a very early beta product – the best we could do with two months of training.”

  • The United States has just unveiled its “most robust worldwide effort to date” in ensuring AI safety.

    The White House described the move as “the most substantial steps ever undertaken by any government to promote advancements in AI safety”.

    Through an executive order, US President Joe Biden has required AI developers to disclose safety-related information to the federal government.

    The move puts the US at centre stage in the international debate over AI governance.

    This week, the UK government is hosting an AI safety summit under Prime Minister Rishi Sunak.

    The two-day meeting at Bletchley Park begins on 1 November. It follows fears that increasingly advanced AI systems could enable the creation of more dangerous bioweapons or crippling cyber-attacks.

    In his remarks, Mr Biden pledged to harness the power of AI without compromising the safety of the American people.

    Speaking to the BBC, the tech entrepreneur and AI expert Gary Marcus said the American plans appeared broader in both scope and ambition than the UK’s.

    He said Biden’s executive order set a high initial bar, with a broad scope addressing both immediate and long-term risks, and with some teeth to enforce it.

    The UK summit, by contrast, appears narrowly focused on long-term risks rather than today’s problems, he added, and it is unclear what authority it will have or how much bite.

    Alex Krasodomski, a senior research associate at Chatham House, told the BBC that the executive order showed the United States taking a leading role in how such threats should be tackled.

    On Monday, Mr Biden told reporters and tech workers at the White House that as artificial intelligence expands the boundary of human possibility and tests the bounds of human understanding, this landmark executive order is a testament to what America stands for:

    “Safety, security, trust, openness, American leadership, and the undeniable rights endowed by a creator that no creation can take away.”

    The US measures include:

    The creation of new safety and security standards for AI, including a requirement for AI companies to share the results of safety tests with the federal government.
    Protecting consumer privacy, including by developing standards that agencies can use to evaluate privacy techniques used in AI.
    Preventing AI algorithms from discriminating, and establishing best practice on the role of AI in the justice system.
    Creating a programme to evaluate potentially harmful AI-related healthcare practices, and resources on how teachers can responsibly use AI tools in education.
    Working with international partners to implement AI standards around the world.
    The Biden administration is also expanding its AI workforce: from Monday, job opportunities for AI professionals in the federal government can be found at AI.gov.

    It is a significant order, but one that may tread on the UK’s vision and aims for the summit.

    Although the UK summit is mentioned in the executive order under “advancing American leadership abroad”, it is American companies, racing to outpace their Chinese rivals, that are really driving and leading in this arena.

    Mr Krasodomski added that it was always going to be hard for a small, technology-focused summit to deliver impact on a global scale, since many of the necessary actions must be taken in different parts of the world.

    Kamala Harris, the US vice-president, and a delegation of US tech giants are expected to attend the AI summit this week.

    Concerns about the potential impact of cutting-edge “frontier” AI are the key issue at the summit. Also attending will be the President of the European Commission, Ursula von der Leyen, and the UN Secretary-General, António Guterres.

    The UK has set its sights on becoming the world’s leading authority on reducing the risks of this powerful technology. Meanwhile, the EU is adopting its AI Act, China has already imposed tough AI restrictions, and the US has now issued this order.

    Additionally, the G7, according to the Reuters news agency, has been developing a code of conduct for companies producing sophisticated AI technology.

    All of which raises the question of what will be left to discuss at Bletchley Park this week.

  • Elon Musk informs Rishi Sunak that artificial intelligence will ultimately eliminate the need for work.

    Tech billionaire Elon Musk has predicted that artificial intelligence will eventually mean that no one will have to work.

    The prediction came during a highly unusual “in conversation” event with Prime Minister Rishi Sunak at the close of this week’s AI summit.

    During the 50-minute interview, Mr Musk predicted that the technology would eventually render paid work obsolete.

    He also cautioned about humanoid robots “that can track you down anywhere”.

    The pair discussed London’s position as one of the leading cities in AI, and how the technology could transform education.

    But there were darker turns to the chat, with Mr Sunak acknowledging “people’s anxiety” about jobs being taken, and the pair agreeing that a referee would be needed to keep watch over the super-computers of the future.

    Mr Musk, an AI investor and the owner of carmaker Tesla, which develops self-driving technology, has nevertheless repeatedly raised concerns about the dangers AI poses to society and to humanity itself.

    He told the audience there was a safety concern, particularly with humanoid robots: at least a car cannot chase you into a building or up a tree.

    In response, Mr Sunak, who is keen to boost investment in the UK’s growing tech sector, joked that Mr Musk was hardly making a compelling sales pitch for the technology.

    It is not every day that you see the prime minister of a country in conversation with someone like Mr Musk, but Mr Sunak appeared entirely at ease playing host to his distinguished guest.

    That should come as no surprise: he once lived in California, home of Silicon Valley, and is a self-confessed technology enthusiast.

    The event took place in a grand hall at Lancaster House in central London, before an audience of tech-world figures. Notably, TV cameras were not allowed in, and Downing Street instead issued its own footage.

    There was no Q&A session, and reporters who attended were told in advance that their questions would not be taken.

    During the discussion, the pair considered the potential benefits of AI. Mr Musk noted that one of his sons has trouble making friends, and said an AI friend could be great for him.

    They agreed that AI could help young people learn more than any human tutor could, with Mr Musk calling it “the most patient of tutors”. But they also sounded a warning about the massive job losses that could follow.

    Mr Musk described AI as one of the most disruptive forces in history, predicting that there will come a point where no job is needed because AI will be able to do everything, with people working only when they want to. That, he said, was both good and bad – and one of the challenges of the future would be finding meaning in life.

    Amid all the deep discussions, there were few new revelations about how the technology will be used and regulated in the UK, except for the prime minister’s pledge to enhance the government’s website using AI.

    While Mr. Musk was one of the prominent guests at this week’s summit, it briefly seemed like the event with Mr. Sunak might be overshadowed. Just hours before it was set to begin, Mr. Musk used his platform X, formerly known as Twitter, to take a dig at the summit.

    As Mr. Sunak was wrapping up his final press conference at Bletchley Park, Mr. Musk shared a satirical cartoon about an “AI Safety Summit.” In the cartoon, caricatures representing the UK, European Union, China, and the US expressed concerns about the potential catastrophic risks of AI in their speech bubbles, while their thought bubbles revealed a desire to be the first to develop it.

    However, in the end, the two appeared to be at ease with each other, and Mr. Sunak, in particular, seemed in his element – perhaps even a bit starstruck by the controversial billionaire, whom he praised as a “brilliant innovator and technologist.”

    From the perspective of onlookers behind the tech world dignitaries, it was difficult to determine who held more power in this duo. Was it Mr. Sunak, who asked questions of the celebrity tech billionaire, or was it Mr. Musk, who did most of the talking?

    Regardless, both men aspire to influence the course of our AI future.

  • Artificial intelligence played a vital role in detecting a woman’s bowel cancer.

    A woman who participated in a study that utilized artificial intelligence (AI) to identify bowel cancer has been declared cancer-free after the disease was successfully detected and removed.

    Jean Tyler, 75, from South Shields, took part in the Colo-Detect study, a trial run across ten NHS trusts.

    During the trial, the AI flags up suspicious tissue that might otherwise go unnoticed by the doctor performing the colonoscopy.

    About 2,000 patients have been recruited across the ten NHS trusts.

    About a year ago, the AI identified several polyps and a cancerous area during Mrs. Tyler’s colonoscopy as part of the trial. Following the AI’s detection, she underwent surgery at South Tyneside District Hospital and has since made a full recovery.

    “It was fantastic support, it was amazing,” she said.

    “I went in about three times last year and I was so well looked after.

    “I say yes to these research projects because I believe they can make things better for all of us.”

    Colin Rees, professor of gastroenterology at Newcastle University, was the lead investigator of the study, working with co-investigators at South Tyneside and Sunderland NHS Foundation Trust.

    Trial sites include North Tees and Hartlepool NHS Foundation Trust, South Tees NHS Foundation Trust, Northumbria NHS Foundation Trust and Newcastle Upon Tyne Hospitals NHS Foundation Trust.

    Professor Rees described the project as “world leading” in improving detection, adding that AI is likely to become one of the major tools of modern medicine.

    The results will now be assessed to see how far the technology could save lives from bowel cancer – the UK’s second biggest cancer killer, accounting for about 16,800 deaths a year.

    The team expects to publish its results in the autumn.

  • It’s possible that we could see safer brain surgery with the help of AI within the next two years.

    According to a leading neurosurgeon, AI-assisted brain surgery could become a reality within two years, promising safer and more effective operations.

    New AI technology is helping trainee surgeons learn to perform more accurate keyhole brain surgery. Developed at University College London, it improves the visualisation of small tumours and the vital blood vessels at the centre of the brain.

    According to the government, this initiative could be “a real game changer” for healthcare in Britain.

    Brain surgery demands meticulous precision, with even the slightest deviation having potentially fatal consequences. Avoiding harm to the pituitary gland, a mere grape-sized structure at the brain’s core, is of utmost importance. This gland governs the body’s hormonal balance, and any damage to it can lead to vision impairment.

    Hani Marcus, a consultant neurosurgeon at the National Hospital for Neurology and Neurosurgery, highlights the delicate balance required in the surgical approach: “If you go too small with your approach, then you risk not removing enough of the tumor. If you go too large, you risk damaging these really critical structures.”

    An AI system has analysed more than 200 videos of this type of pituitary surgery, achieving in just ten months a level of experience that would typically take a surgeon a decade to acquire.

    Mr Marcus says that even well-trained surgeons like himself can pinpoint that boundary more accurately with the AI’s help than without it.

    Within a few years, he suggests, an AI system could exist that has seen more operations than any human ever could.

    The technology is also proving very useful to trainee surgeon Dr Nicola Newell.

    She says it helps her orientate herself during mock surgery and indicates what the next stage of the procedure will be.

    According to Viscount Camrose, an AI government minister, artificial intelligence has the potential to significantly boost productivity across various fields, effectively turning individuals into an enhanced, superhero-like version of themselves. He believes that this technology could be a game-changer for healthcare, offering a very promising future and improving outcomes for everyone.

    Recently, the government allocated funding to 22 universities, including University College London (UCL), to drive innovation in healthcare within the UK. At UCL’s Wellcome / Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences, engineers, clinicians and scientists are collaborating on this transformative project.

  • Scientists are enthusiastic about an AI tool that assesses the severity of a rare form of cancer.

    Artificial intelligence appears to be almost twice as effective as the current method at grading the aggressiveness of a rare form of cancer from scans, scientists have found.

    By recognising details invisible to the naked eye, the AI graded tumours with 82% accuracy, compared with 44% for lab analysis of biopsies.

    Researchers from the Royal Marsden Hospital and the Institute of Cancer Research say this could improve treatment and benefit thousands of patients each year.

    They are also excited by its potential to identify breast cancer at an early stage.

    AI has already shown great promise in diagnosing breast cancer and shortening treatment times.

    Computers can be fed vast amounts of information from which they learn to identify patterns, make predictions, solve problems and even learn from their own mistakes.


    “We are very enthusiastic about the possibilities that this cutting-edge technology offers,” commented Professor Christina Messiou, who serves as a consultant radiologist at The Royal Marsden NHS Foundation Trust and holds the position of professor in imaging for personalized oncology at The Institute of Cancer Research in London.

    “We believe it has the potential to improve patient outcomes significantly, by enabling quicker diagnoses and more precisely tailored treatment options,” she added.


    Writing in Lancet Oncology, the researchers used a technique called radiomics to identify features of retroperitoneal sarcoma, invisible to the naked eye, on the MRI and CT scans of 165 patients.

    Using this data, the AI graded the aggressiveness of a further 89 tumours, from scans taken at hospitals across Europe and the US, more accurately than biopsies, in which a small sample of the tumour’s tissue is analysed under a microscope.
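
    The workflow described above can be sketched in miniature: extract numeric features from each scan, learn what high-grade and low-grade cases look like, then grade unseen tumours. Everything below – the two features, their values and the nearest-centroid model – is invented for illustration; the real study used far richer radiomic features and models.

```python
# Toy sketch of a radiomics-style grading workflow. Feature names and
# values are invented; this is NOT the study's actual model.
from statistics import mean

# Invented training data: (texture_heterogeneity, border_irregularity) -> grade
TRAIN = [
    ((0.9, 0.8), "high"), ((0.8, 0.9), "high"), ((0.7, 0.7), "high"),
    ((0.2, 0.1), "low"),  ((0.1, 0.3), "low"),  ((0.3, 0.2), "low"),
]

def centroid(points):
    """Mean feature vector of a list of feature tuples."""
    return tuple(mean(dim) for dim in zip(*points))

def fit(train):
    """Compute one centroid per grade (a nearest-centroid classifier)."""
    by_grade = {}
    for features, grade_label in train:
        by_grade.setdefault(grade_label, []).append(features)
    return {g: centroid(pts) for g, pts in by_grade.items()}

def grade(model, features):
    """Assign the grade whose centroid is closest to the new case."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda g: dist(model[g], features))

model = fit(TRAIN)
print(grade(model, (0.85, 0.75)))  # aggressive-looking case -> high
print(grade(model, (0.15, 0.25)))  # mild-looking case -> low
```

    A real pipeline would extract hundreds of texture and shape features from the scans and validate, as the study did, on cases from other hospitals.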

    Tina McLaughlan, a dental nurse from Bedfordshire, received her diagnosis of a sarcoma in her abdomen last June due to stomach pain. Doctors used computerized-tomography (CT) scan images to identify the issue and decided against a needle biopsy due to potential risks.

    Following the removal of the tumor, the 65-year-old now undergoes scans at the Royal Marsden every three months. While she wasn’t part of the AI trial, she expressed her belief to BBC News that it could greatly benefit other patients.

    “When you have your first scan, and they can’t give you a definite diagnosis – they didn’t inform me throughout my treatment until the post-op histology report, so having immediate results would be incredibly helpful,” said Ms. McLaughlan. “Hopefully, it would lead to faster diagnoses.”

    Each year, around 4,300 people in England are diagnosed with this kind of cancer.

    Prof Messiou believes the technology could one day be used around the world, with high-risk patients receiving personalised treatment while low-risk patients are spared unnecessary treatments and follow-up scans.

    Dr Paul Huang, from the Institute of Cancer Research, London, said this kind of technology could transform the lives of sarcoma patients, making it possible to develop personalised treatment plans tailored to the biology of each cancer.

    “These results are certainly very promising.”


  • The UK has just unveiled an AI agreement, while Elon Musk issues a warning about the possibility of human extinction.

    The UK government has made an announcement about a ground-breaking agreement that addresses the management of the most high-risk AI technologies.

    This covers so-called “frontier AI” – the most advanced forms of the technology, whose full capabilities are not yet understood.

    The agreement was reached at the UK’s AI Safety Summit by participating countries including the US, the EU and China.

    Ahead of the meeting, Elon Musk, one of the attendees, warned that AI could lead to the extinction of humanity.

    Other participants, however, cautioned against dwelling on improbable future threat scenarios, saying the world should instead consider present-day risks, such as AI displacing jobs or entrenching existing prejudice.

    In a pre-recorded statement, King Charles described the development of advanced artificial intelligence as no less significant than the discovery of electricity.

    The risks of AI, he said, must be tackled with speed, unity and strength.

    The UK government also announced that 28 nations had signed the Bletchley Declaration, a document in which signatories agree there is an urgent need to understand and collectively manage the potential risks of artificial intelligence.

    Prime Minister Rishi Sunak called it a landmark moment, with the world’s leading AI powers agreeing on the urgency of understanding the risks of AI in order to help safeguard the future of our children and generations to come.

    The necessity of a global framework has been emphasized by other nations as well.

    Although relations between China and the West are strained in many areas, Vice Minister Wu Zhaohui told the conference that his country seeks openness in artificial intelligence.

    He said delegates should work together globally and share knowledge of AI technologies with the public.


    At the same time, Gina Raimondo, the US Secretary of Commerce, shared that the United States plans to establish its own AI Safety Institute after the summit.

    Michelle Donelan, the UK Technology Secretary, who presided over the initial statements, disclosed that the upcoming summit will be conducted in a virtual format and will be hosted by the Republic of Korea in six months.

    Following that, the next in-person gathering is scheduled to take place in France one year from now.

    Elon Musk, meanwhile, told comedian Joe Rogan’s podcast that some people might seek to use AI to protect the planet, even if that implied ending human life.

    The implication, he said, is that if you start concluding “humans are bad”, then humans should die.

    “If AI is created by extinctionists, its utility function would be human extinction … they wouldn’t even realize that it was a bad thing.”

    He made the comments on Tuesday, ahead of the UK’s AI safety summit, where he was due to speak with the prime minister.

    Numerous experts regard such warnings as exaggerated.

    Among the other notable attendees was Nick Clegg, Meta’s president of global affairs and a former UK deputy prime minister, who urged people not to let “speculative, sometimes somewhat futuristic predictions” crowd out more immediate challenges.


    The idea that environmentalists are prepared to go to any lengths to shrink the world’s human population is a stock claim among climate-change deniers on popular social networks.

    Some of these users brand climate activists a “death cult” intent on population-control measures, up to and including compulsory sterilisation.

    None of these assertions is supported by evidence, and you would struggle to find any mainstream climate scientist or environmentalist who supports such measures.

    It is also true that some of the world’s most populous countries have among the lowest emissions per person.


    For many, the most significant concerns about AI are the automation of jobs and the risk of embedding existing biases and prejudices into powerful new online systems.

  • Oxford University has created an AI tool designed to monitor and trace virus variants.

    Researchers have developed an artificial intelligence tool that can forecast the emergence of new virus variants.

    According to researchers from the University of Oxford and Harvard Medical School, the model could have predicted how Covid-19 strains would mutate had it been available during the pandemic.

    Named EVEscape, the model should help immunologists design new vaccine strategies based on how viruses mutate in order to survive the human immune response.

    The University of Oxford described the technology as “predicting the future”.

    During the Covid-19 pandemic, successive waves of infection were driven by new variants arising through mutation.

    Such mutations can change how a virus behaves, making it easier to transmit or harder for the immune system to clear from the body.

    The Omicron variant, for example, infected huge numbers of people in late 2021, yet did not cause a proportionate rise in hospitalisations and deaths.

    EVEscape combines EVE – an evolutionary model of variant effects built on a deep neural network – with detailed structural information about how a particular virus behaves.

    Writing in the journal Nature, the researchers described how the model works: it predicts the likelihood that a given mutation will allow the virus to escape antibodies and other immune responses.
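
    Loosely, that kind of per-mutation scoring can be sketched as combining three signals: whether the mutation keeps the virus viable, whether the mutated site is exposed to antibodies, and how large the chemical change is. The mutation labels below are real examples, but every number and the weighting scheme are invented for illustration – this is not the paper’s actual model.

```python
# Illustrative sketch of EVEscape-style scoring. All values invented.
import math

def escape_score(fitness_logprob, accessibility, dissimilarity):
    """Combine three per-mutation signals into one escape score.

    fitness_logprob: log-odds the mutation keeps the virus viable
    accessibility:   0..1, antibody exposure of the mutated site
    dissimilarity:   0..1, how chemically different the substitution is
    """
    # Squash the log-odds to 0..1, then require all three conditions
    # jointly by taking the product.
    fitness = 1.0 / (1.0 + math.exp(-fitness_logprob))
    return fitness * accessibility * dissimilarity

candidates = {
    "E484K": escape_score(fitness_logprob=1.5, accessibility=0.9, dissimilarity=0.8),
    "A222V": escape_score(fitness_logprob=0.5, accessibility=0.3, dissimilarity=0.2),
}
# The higher-scoring mutation is flagged for surveillance first.
print(max(candidates, key=candidates.get))  # -> E484K
```

    The product means a mutation only scores highly if it is simultaneously viable, exposed and chemically disruptive – a mutation that kills the virus, or sits where antibodies never bind, scores near zero however strong its other signals.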


    The researchers tested the model using data available only at the beginning of the Covid-19 pandemic in February 2020. Impressively, it successfully anticipated which mutations of SARS-CoV-2 would emerge and become more common.

    The team’s AI tool also foresaw which antibody-based treatments would lose their effectiveness as the pandemic evolved and the virus developed mutations to evade these therapies.

    This technology holds promise for aiding in preventive measures and the development of vaccines that can target concerning variants before they gain prominence.

    Pascal Notin, one of the study’s co-lead authors, mentioned that if this tool had been used at the outset of the pandemic, it would have accurately predicted the most prevalent Covid-19 mutations. He also highlighted the significant value of this work for pandemic surveillance and its potential to inform robust vaccine design in the face of emerging high-risk mutations.

  • Is artificial intelligence on the verge of revolutionizing the field of law?


    If there were a court case to decide whether society should adopt or reject artificial intelligence (AI), it’s probable that the jury would be deadlocked.

    No one, it seems, can decide whether the advantages – such as automating writing tasks and searching huge volumes of information in seconds – outweigh the disadvantages of unreliable data, imprecision and a lack of accountability.

    For the legal profession itself, AI is both an opportunity and a threat. A 2021 report by the UK’s Law Society suggested it could mean a “savage reduction” in human jobs.

    Similarly, a study published this year by researchers from Princeton University, the University of Pennsylvania and New York University concluded that the legal profession was the industry most exposed to disruption by AI.

    At the same time, AI has great potential value as a support tool for research and for building a case. There are, however, precedents for things going badly wrong.

    This year, New York attorney Steven Schwartz found himself in trouble after using ChatGPT to research precedents for a case in which a man sued an airline over a personal injury – six of the seven cases he cited had been fabricated by the AI.

    Episodes like that might be expected to make law firms wary, but Ben Allgrove, chief innovation officer at international law firm Baker McKenzie, takes a different view.

    “I don’t think that is a technology story; it is a lawyer story,” he says. “The first issue is Mr Schwartz’s professionalism and ethics, and only then the fact that it was not an appropriate tool for the job.”



    Baker McKenzie has been tracking developments in AI since 2017 and has assembled a team of lawyers, data scientists and data engineers to test the new systems coming to market.

    Mr Allgrove believes most of the AI use at his firm will come through AI-enabled versions of its existing legal software, from providers such as LexisNexis and Microsoft’s 365 suite.

    LexisNexis has already launched its AI platform, which can answer legal questions, draft documents and summarise legal issues, while Microsoft’s AI tool, Copilot, becomes commercially available next month as a paid add-on to Microsoft 365.

    His firm already uses LexisNexis and Microsoft products, which are set to gain many generative AI capabilities, and, he says, it will buy those tools when they work and are appropriately priced.

    Generative AI – the variety behind the current buzz – can produce text, pictures or even music based on the data it was trained on.

    For now, though, the premium versions of such tools are expensive. According to Mr Allgrove, paying for Microsoft’s Copilot service alone “would double our technology spend”.

    The alternative is for law firms to pay much less for AI systems that are not aimed at the legal market – such as Google’s Bard, Meta’s Llama or OpenAI’s ChatGPT – and then connect to those platforms and tune them for their own legal use.

    Baker McKenzie is already running several such trials, going into the market to test how these models perform, says Mr Allgrove.


    Such testing is important, he explained, in order to validate performance, because all of these systems will make mistakes.
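
    A minimal sketch of what that validation might look like: run the system under test against questions with known answers and measure how often it errs. The `ask_model` stub and the test cases below are invented stand-ins; a real harness would call the actual model and use vetted legal questions.

```python
# Toy validation harness for a legal-AI system. The model call and
# cases are placeholders invented for illustration.

def ask_model(question):
    """Stand-in for a call to the AI system under test."""
    canned = {
        "Is a verbal contract ever enforceable in England?": "yes",
        "Does UK employment law cap unfair-dismissal awards?": "yes",
        "Is there a statute of limitations on murder in the UK?": "no",
    }
    return canned.get(question, "unknown")

def validate(cases):
    """Return the fraction of test cases the model answers correctly."""
    correct = sum(1 for q, expected in cases if ask_model(q) == expected)
    return correct / len(cases)

TEST_CASES = [
    ("Is a verbal contract ever enforceable in England?", "yes"),
    ("Does UK employment law cap unfair-dismissal awards?", "yes"),
    ("Is there a statute of limitations on murder in the UK?", "no"),
    ("Can an AI sign a contract on its own behalf?", "no"),  # model errs here
]

accuracy = validate(TEST_CASES)
print(f"accuracy: {accuracy:.0%}")  # -> accuracy: 75%
```

    The point of such a harness is exactly the one Mr Allgrove makes: every system will miss some cases, so a firm needs a measured error rate before deciding whether a tool is fit for legal work.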

    Robin AI is a legal software system built around an “AI co-pilot” that helps in-house lawyers and individuals draft and interpret contracts more quickly.

    It mainly uses AI developed by Anthropic, a company founded by a former vice-president of research at OpenAI and backed by an investment from Google.

    But Robin AI has also developed its own models, trained on the finer points of contract law: every contract it handles is stored, labelled and used for training.

    As a result, the company has built up a large database of contracts, which Karolina Lukoszova, co-head of legal and product at the UK-based firm, believes will underpin the adoption of AI in the legal profession.

    “Companies will need to train their own smaller models on their own data within the company,” she explains. “This approach will yield superior results and create a clear boundary of protection.”

    To ensure the accuracy of information, Robin AI employs a team of human lawyers who work in tandem with the AI.

    Alex Monaco, an employment lawyer who runs his own solicitors’ practice and a tech firm called Grapple, developed the platform to offer the public what he calls “an ontology of employment law”. It provides guidance on workplace issues such as bullying, harassment and redundancy, and can generate legal letters and case summaries.

    He’s enthusiastic about AI’s potential to democratize the legal profession. “Around 95% of the inquiries we receive come from people who simply can’t afford lawyers,” Mr. Monaco remarks.

    However, due to the wide availability of free AI tools, people can now assemble their legal cases. Anyone with an internet connection can use platforms like Bard or ChatGPT to assist in drafting legal letters. While it may not match the quality of a letter composed by a lawyer, it comes at no cost.
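
    As a rough illustration of how a member of the public might put a free chatbot to that use, the snippet below assembles a prompt asking for a first-draft grievance letter. The template, issue and facts are all invented, and no real chatbot or service is called – the resulting text would simply be pasted into a tool such as Bard or ChatGPT.

```python
# Hypothetical prompt builder for asking a chatbot to draft a legal
# letter. All details are invented for illustration.
def draft_letter_prompt(issue, employer, facts):
    """Build a prompt requesting a first-draft workplace grievance letter."""
    return (
        "Draft a formal workplace grievance letter.\n"
        f"Issue: {issue}\n"
        f"Addressed to: {employer}\n"
        "Relevant facts:\n"
        + "\n".join(f"- {fact}" for fact in facts)
        + "\nUse a polite, professional tone and cite no case law."
    )

prompt = draft_letter_prompt(
    issue="unpaid overtime",
    employer="Acme Ltd HR department",
    facts=["overtime worked since March", "no overtime payments received"],
)
print(prompt)
```

    As Mr Monaco notes, the result will not match a letter composed by a lawyer – but for someone who cannot afford one, it costs nothing.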

    “AI isn’t replacing humans, and it’s not replacing lawyers. What it’s doing is enhancing people’s comprehension and execution of their legal rights,” he asserts.

    This, he stresses, matters all the more in a world where AI is everywhere.

    Some employees complain that companies are already using AI against the ordinary worker – in CV profiling, AI-driven restructurings and mass dismissals, among other practices.

    And although the use of AI in law is still new, some firms are already encountering legal issues of their own.

    DoNotPay, a company that describes itself as “the world’s first robot lawyer” and promises to fight parking tickets and other citizen claims using AI software, is facing several lawsuits, including one accusing it of practising law without a licence.

    In the US, meanwhile, senior judges have responded to cases like Mr Schwartz’s by requiring lawyers to declare in their court filings whether or not AI was used.

    But, as Mr Monaco points out, such a rule will be hard to define and to enforce.

    Google builds artificial intelligence into its search algorithms and now offers Bard, so even “googling” something means AI is already helping with your legal research.