The UK’s innovative approach to AI regulation has made us a world leader in both AI safety and AI development.
I am personally driven by AI’s potential to transform our public services and the economy for the better — leading to new treatments for cruel diseases like cancer and dementia, and opening the door to advanced skills and technology that will power the British economy of the future.
AI is moving fast, but we have shown that humans can move just as fast. By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.
We are very excited to roll it out to further enhance and expand our community’s daily communication.
Zhen Fang, Roblox
"A website called OnlyFake is claiming to use “neural networks” to generate realistic looking photos of fake IDs for just $15, radically disrupting the marketplace for fake identities and cybersecurity more generally," reported 404Media.
"In our own tests, OnlyFake created a highly convincing California driver's license, complete with whatever arbitrary name, biographical information, address, expiration date, and signature we wanted," reported 404Media.
404 Media then used another fake ID generated by the service to successfully step through the KYC process on OKX exchange.
"The service claims to use 'generators' which create up to 20,000 documents a day. The service’s owner, who goes by the moniker John Wick, told 404 Media that hundreds of documents can be generated at once using data from an Excel table."
Wick also told 404 Media their service could be used to bypass verification at a host of sites, including Binance, Revolut, Wise, Kraken, Bybit, Payoneer, Huobi, Airbnb, OKX and Coinbase, while adding that it shouldn't be used to forge documents.
"OnlyFake offers other technical measures to make its churned out images even more convincing. The service offers a metadata changer because identity verification services, or people, may inspect this information to determine if the photo is fake or not," was stated in the article.
The development of artificial intelligence systems is transitioning from creating static, task-specific models to dynamic, agent-based systems capable of performing well in a wide range of applications. We propose an Interactive Agent Foundation Model that uses a novel multi-task agent training paradigm for training AI agents across a wide range of domains, datasets, and tasks. Our training paradigm unifies diverse pre-training strategies, including visual masked auto-encoders, language modeling, and next-action prediction, enabling a versatile and adaptable AI framework. We demonstrate the performance of our framework across three separate domains -- Robotics, Gaming AI, and Healthcare. Our model demonstrates its ability to generate meaningful and contextually relevant outputs in each area. The strength of our approach lies in its generality, leveraging a variety of data sources such as robotics sequences, gameplay data, large-scale video datasets, and textual information for effective multimodal and multi-task learning. Our approach provides a promising avenue for developing generalist, action-taking, multimodal systems.
An Interactive Agent Foundation Model (arXiv:2402.05929 [cs.AI]; arXiv:2402.05929v1 for this version)
Subjects: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Robotics (cs.RO)
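The "unified pre-training strategies" the abstract describes can be pictured as a single model optimized against a weighted sum of per-task objectives. The sketch below is not the paper's actual code; the loss values and weights are placeholders, and real implementations would compute each term from model outputs.

```python
# Minimal sketch of a unified multi-task training objective: one weighted
# sum over a masked-reconstruction loss (visual MAE), a language-modeling
# loss, and a next-action-prediction loss. All numbers are placeholders.
def combined_loss(l_mae: float, l_lm: float, l_act: float,
                  w_mae: float = 1.0, w_lm: float = 1.0,
                  w_act: float = 1.0) -> float:
    """Weighted sum of the three pre-training losses."""
    return w_mae * l_mae + w_lm * l_lm + w_act * l_act

# One hypothetical training step's per-task losses:
print(combined_loss(1.0, 2.0, 0.5))
```

The weights let a practitioner rebalance domains (robotics sequences vs. gameplay vs. text) without changing the model, which is one common way such generalist training is organized.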
Depending on where you work, there's a significant chance that artificial intelligence is analyzing your messages on Slack, Microsoft Teams, Zoom and other popular apps.
Huge U.S. employers such as Walmart, Delta Air Lines, T-Mobile, Chevron and Starbucks, as well as European brands including Nestle and AstraZeneca, have turned to a seven-year-old startup, Aware, to monitor chatter among their rank and file, according to the company.
Jeff Schumann, co-founder and CEO of the Columbus, Ohio-based startup, says the AI helps companies “understand the risk within their communications,” getting a read on employee sentiment in real time, rather than depending on an annual or twice-per-year survey.
Using the anonymized data in Aware’s analytics product, clients can see how employees of a certain age group or in a particular geography are responding to a new corporate policy or marketing campaign, according to Schumann. Aware’s dozens of AI models, built to read text and process images, can also identify bullying, harassment, discrimination, noncompliance, pornography, nudity and other behaviors, he said.
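The cohort-level analytics Schumann describes can be sketched as a simple aggregation: individual messages are reduced to anonymized (cohort, sentiment score) pairs, and only per-cohort averages are exposed. This is a toy illustration of that general pattern, not Aware's actual pipeline.

```python
# Toy sketch of anonymized cohort aggregation (not Aware's real system):
# only (cohort, sentiment_score) pairs are stored, never employee names,
# and clients see only per-cohort averages.
from collections import defaultdict

def cohort_sentiment(records):
    """records: iterable of (cohort, score) pairs; returns per-cohort means."""
    totals = defaultdict(lambda: [0.0, 0])
    for cohort, score in records:
        totals[cohort][0] += score
        totals[cohort][1] += 1
    return {c: s / n for c, (s, n) in totals.items()}

# Hypothetical anonymized records bucketed by region and age band:
sample = [("US-18-25", 0.2), ("US-18-25", 0.6), ("EU-26-35", -0.4)]
print(cohort_sentiment(sample))
```

Whether aggregation of this kind is sufficient anonymization is exactly the question critics like Williams raise, since small cohorts can still be re-identifiable.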
Aware’s analytics tool — the one that monitors employee sentiment and toxicity — doesn’t have the ability to flag individual employee names, according to Schumann. But its separate eDiscovery tool can, in the event of extreme threats or other risk behaviors that are predetermined by the client, he added.
Aware said Walmart, T-Mobile, Chevron and Starbucks use its technology for governance, risk and compliance, and that type of work accounts for about 80% of the company's business.
Jutta Williams, co-founder of AI accountability nonprofit Humane Intelligence, said AI adds a new and potentially problematic wrinkle to so-called insider risk programs, which have existed for years to evaluate things like corporate espionage, especially within email communications.
Speaking broadly about employee surveillance AI rather than Aware’s technology specifically, Williams told CNBC: “A lot of this becomes thought crime.” She added, “This is treating people like inventory in a way I’ve not seen.”
Aware’s revenue has jumped 150% per year on average over the past five years, Schumann told CNBC, and its typical customer has about 30,000 employees. Top competitors include Qualtrics, Relativity, Proofpoint, Smarsh and Netskope.