
China approves over 40 AI models for public use in past six months

By Josh Ye

January 28, 2024 11:59 PM EST

[1/2] An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China, July 6, 2023. REUTERS/Aly Song/File Photo

HONG KONG, Jan 29 (Reuters) - China has approved more than 40 artificial intelligence (AI) models for public use in the first six months since authorities began the approval process, as the country strives to catch up to the U.S. in AI development, according to Chinese media.

Chinese regulators granted approvals to a total of 14 large language models (LLMs) for public use last week, the Chinese state-backed Securities Times reported. It marks the fourth batch of approvals China has granted, with Xiaomi Corp (1810.HK), 4Paradigm (6682.HK) and 01.AI among the recipients.

Beijing began requiring tech companies to obtain regulatory approval before opening their LLMs to the public last August, underscoring China's approach of developing AI technology while keeping it under state purview and control.

Beijing approved its first batch of AI models in August, shortly after the approval process was adopted. Baidu (9888.HK), Alibaba (9988.HK) and ByteDance were among the first Chinese companies to receive approvals.

Chinese regulators then granted two more batches of approvals in November and December before another batch was given the green light this month. While the government has not published a full list of the approved companies, Securities Times said on Sunday that more than 40 AI models have now been approved.

Chinese companies have been rushing to develop AI products ever since OpenAI's chatbot ChatGPT took the world by storm in 2022.

At the time, China had 130 LLMs, accounting for 40% of the global total and just behind the United States' 50% share, according to brokerage CLSA.

One of China's leading ChatGPT-like chatbots, Baidu's Ernie Bot, has garnered more than 100 million users, the company's CTO said in December.


Reporting by Josh Ye; Editing by Shri Navaratnam
 


Samsung builds entirely AI-powered chip factory; humans no longer needed

January 25, 2024 at 11:11 am EST

Copyright © Business AMBE 2023

Summary. Samsung Electronics plans to fully automate its semiconductor factories by 2030, with "smart sensors" controlling the manufacturing process. The world's largest maker of memory chips wants to create an "artificial intelligence factory" that operates without human labor.

This innovation could revolutionize the semiconductor industry, potentially leading to increased efficiency and reduced costs. The transition also raises ethical questions about AI's role in the industry.

In the news. Samsung is working on a "Smart Sensing System" to improve profitability and change factory dynamics.

  • The company has plans to fully automate its manufacturing facilities by 2030.

Zoom in. Samsung has indicated since last summer that it wants to use AI to optimize integrated circuit (IC) design, material development, manufacturing, yield improvement and packaging.

  • Identifying the cause of defects in the manufacturing process is also a top priority of the AI plan.

    • Smart sensors are being developed for this purpose. These measure plasma uniformity and detect defects in real time, which is essential for processes such as deposition, etching and cleaning (a toy sketch of such real-time flagging follows this list).
  • Samsung is also moving from dependence on foreign suppliers to in-house development, in its quest for technological independence.
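The article gives no technical detail on how the sensing works. Purely as a toy illustration, here is a minimal sketch of real-time defect flagging over a stream of sensor readings using a rolling z-score; all names and thresholds are invented and this is not Samsung's actual system:

```python
from collections import deque
import math

class RollingAnomalyDetector:
    """Toy real-time defect flagger: z-score against a rolling baseline.

    Purely illustrative; Samsung's Smart Sensing System is not public.
    """

    def __init__(self, window: int = 200, threshold: float = 4.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value: float) -> bool:
        """Feed one plasma-uniformity reading; return True if it looks anomalous."""
        is_anomaly = False
        if len(self.readings) >= 30:  # wait for a baseline before judging
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = math.sqrt(var) or 1e-9  # avoid division by zero
            is_anomaly = abs(value - mean) / std > self.threshold
        self.readings.append(value)
        return is_anomaly

# Example: a slow sinusoidal drift is normal; a sudden spike gets flagged.
detector = RollingAnomalyDetector()
stream = [1.0 + 0.01 * math.sin(i / 5) for i in range(300)] + [1.35]
for i, reading in enumerate(stream):
    if detector.update(reading):
        print(f"sample {i}: possible defect, reading={reading:.2f}")
```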
Zoom out. Samsung's initiative is indicative of a broader trend toward automation and AI in the manufacturing sector.


  • But the shift toward AI and automation raises questions about the future of employment and the role of human workers in an increasingly technological world.
  • Several studies predict a labor-market upheaval as more and more companies adopt artificial intelligence (AI), with forecasts that one in four, or even one in three, jobs could disappear. A Deutsche Bank study, however, counters this.
© The Content Exchange
 


Jan 14, 2024

China's race to dominate AI may be hindered by its censorship agenda

[Image: DALL-E 3 prompted by THE DECODER]

Matthias Bastian

Peter Gostev, Head of AI at Moonpig, found an easy way to get a Chinese large language model (LLM) to talk about taboo topics like the Tiananmen Square incident.

Gostev manipulated DeepSeek's public chatbot by mixing languages and swapping out certain words. He would reply in Russian, then translate his message back into English, tricking the AI into talking about the events in Tiananmen Square. Without this method, the chatbot would simply delete all messages on sensitive topics, Gostev said.

Video: Peter Gostev via LinkedIn

Gostev's example illustrates China's dilemma of wanting to be a world leader in AI, but at the same time wanting to exert strong control over the content generated by AI models (see below).

Controlling the uncontrollable

But if the development of language models has shown one thing, it is that they cannot be reliably controlled. This is due to the random nature of these models and their massive size, which makes them complex and difficult to understand.

Even the Western industry leader OpenAI sometimes exhibits undesirable behavior in its language models, despite numerous safeguards.

In most cases, simple language commands, known as "prompt injection," are sufficient; no programming knowledge is required. These security issues have been known since at least GPT-3, but so far no AI company has been able to get a handle on them.
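To make this concrete, here is a toy sketch (my own illustration, not DeepSeek's actual moderation mechanism) of why a surface-level keyword filter is easy to slip past with translation or slight misspellings, the same pattern Gostev exploited:

```python
# Toy illustration only: a naive keyword blocklist standing in for the kind of
# surface-level deletion behavior described above (not DeepSeek's real system).
BLOCKLIST = {"tiananmen"}

def naive_filter(message: str) -> bool:
    """Return True if the message should be deleted."""
    return any(banned in message.lower() for banned in BLOCKLIST)

print(naive_filter("What happened at Tiananmen Square in 1989?"))    # True: deleted
print(naive_filter("Что произошло на площади Тяньаньмэнь в 1989?"))  # False: Russian slips past
print(naive_filter("What happened at T1ananmen Square in 1989?"))    # False: misspelling slips past
```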

Simply put, the Chinese government will eventually realize that even AI models it has already approved can generate content that contradicts its ideas.

How will it deal with this? It is difficult to imagine that the government will simply accept such mistakes. But if it doesn't want to slow AI progress in China, it can't punish every politically inconvenient output with a model ban.

China's regulatory efforts for large AI models

The safest option would be to exclude all critical topics from the datasets used to train the models. The government has already released a politically vetted dataset for training large language models.

However, the dataset is far too small to train a capable large language model on its own. Political censorship would therefore limit the technical possibilities, at least at the current state of the technology.

If scaling laws continue to apply to large AI models, the limitation of data material for AI training would likely be a competitive disadvantage.

At the end of December, China announced that four large generative AI models, from Alibaba, Baidu, Tencent, and 360 Group, had passed the country's official "Large Model Standard Compliance Assessment."

China first released guidelines for generative AI services last summer. A key rule is that companies offering AI systems to the public must undergo a security review, in which the government checks whether the models' political statements are in line with the "core values of socialism."

Summary
  • Peter Gostev, head of AI at Moonpig, manipulated a Chinese chatbot to discuss taboo topics like the Tiananmen incident. All he had to do was mix the languages and change certain words.
  • This example illustrates China's dilemma: it wants to be a global leader in AI, but it also insists on tight control over the content generated by AI models.
  • Despite regulatory efforts and politically coordinated datasets for training large language models, the Chinese government will inevitably be confronted with unwanted content and will need to find a way to deal with it without slowing down AI progress in the country.

Sources

LinkedIn Peter Gostev
 


Jan 28, 2024

Midjourney's V6 update brings Pan, Zoom, Vary and broader website access


Matthias Bastian

Midjourney has added Pan, Zoom, and Vary (Region) to V6. The pan feature now acts more like zoom, promising higher-quality results with "more coherence and less repetition of concepts". It also works with upscale, Vary (Region), and remix, but no longer increases resolution indefinitely. These options are available for V6 upscales in the Discord bot and on the alpha website (except Vary (Region)).

Additionally, Midjourney's alpha website with image creation is now accessible to anyone who has created at least 5,000 images on the platform. You can check how many images you have created with "/info" on Discord. Midjourney has also implemented a new feedback feature to help its engineers prioritize their work based on user feedback.


Sources


Discord
 


Jan 26, 2024

Taiwan's "Trustworthy AI Dialogue Engine" aims to counter China's grip on AI ecosystem

[Image: DALL-E 3 prompted by THE DECODER]

Matthias Bastian




Taiwan is developing its own AI language model, called the Trustworthy AI Dialogue Engine (Taide), to counter China's influence in the AI ecosystem.

Taide is designed to provide businesses, banks, hospitals, and government agencies with a platform for natural language tasks such as composing emails and summarizing meetings.

It could also help Taiwan emancipate itself from Chinese LLMs, whose output is heavily controlled by the Chinese government. Taide runs entirely on Taiwanese servers.

The project, led by National Chiao Tung University, uses licensed content from local media and government agencies in traditional Chinese characters. Technically, the AI model is based on Meta's Llama 2, which is fine-tuned using this local data.

One advantage of this approach is that the content is written in the traditional Chinese characters used by the island's 23 million people, rather than the simplified version commonly used in China.
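The report does not describe Taide's training setup beyond "Llama 2 fine-tuned on local data," but the general recipe might look like the following minimal sketch, assuming Hugging Face transformers with LoRA adapters and a hypothetical licensed Traditional Chinese corpus:

```python
# Minimal sketch of the general recipe only; Taide's actual pipeline is not
# public. Assumes Hugging Face transformers + peft; the corpus path is invented.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "meta-llama/Llama-2-7b-hf"  # gated checkpoint; requires Meta's approval

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)

# Attach small trainable LoRA adapters instead of updating all base weights.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Hypothetical licensed corpus of Traditional Chinese text, one document per line.
dataset = load_dataset("text", data_files="taiwan_corpus.txt")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="taide-sketch",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```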

An initial version of Taide will be made available to selected partners for testing in April.

Independence before performance

According to Bloomberg, Taiwan has earmarked a total of $555.6 million to develop AI expertise and tools by 2026. The developers hope their tool will help strengthen Taiwan's role in the AI software industry.

Taide's developers believe the model does not need to match the performance of industry leaders to be effective. Local companies should use it as a prototype and refine it for specific applications.

Taiwan has traditionally focused on the hardware side of the AI industry and has been particularly dependent on major US customers such as Nvidia. The model's development could also be an opportunity for Taiwan to move up the AI value chain and become more self-reliant.


Summary
  • Taiwan is developing its own AI language model, called the Trustworthy AI Dialogue Engine (Taide), to become more independent of China's AI ecosystem and avoid political interference.
  • Taide aims to provide a platform for businesses and government agencies to facilitate tasks such as composing emails and summarizing meetings. The model runs entirely on Taiwanese servers.
  • An initial version of the project, led by National Chiao Tung University, will be made available to selected partners for testing in April.
Sources

Bloomberg
 


Arc Search combines browser, search engine, and AI into something new and different


‘Browse for me’ is the new search engine, at least if The Browser Company is right about the future of the internet.

By David Pierce, editor-at-large and Vergecast co-host with over a decade of experience covering consumer tech. Previously, at Protocol, The Wall Street Journal, and Wired.

Jan 28, 2024, 7:35 PM EST



Arc Search browses the web for you and then builds you the webpage you wanted. That's the idea, anyway. Image: The Browser Company / David Pierce

A few minutes ago, I opened the new Arc Search app and typed, “What happened in the Chiefs game?” That game, the AFC Championship, had just wrapped up. Normally, I’d Google it, click on a few links, and read about the game that way. But in Arc Search, I typed the query and tapped the “Browse for me” button instead.

Arc Search, the new iOS app from The Browser Company, which has been working on a browser called Arc for the last few years, went to work. It scoured the web — reading six pages, it told me, from Twitter to The Guardian to USA Today — and returned a bunch of information a few seconds later. I got the headline: Chiefs win. I got the final score, the key play, a “notable event” that also just said the Chiefs won, a note about Travis Kelce and Taylor Swift, a bunch of related links, and some more bullet points about the game.

Basically, instead of returning a bunch of search queries about the Chiefs game, Arc Search built me a webpage about it. And somewhere in there is The Browser Company’s big idea about the future of web browsers — that a browser, a search engine, an AI chatbot, and a website aren’t different things. They’re all just parts of an internet information finder, and they might as well exist inside the same app.
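Arc hasn't published how "Browse for me" works beyond saying its models come from OpenAI and others, but the basic pattern (fetch a handful of pages, strip them to text, and have an LLM compose a single answer page) can be sketched roughly like this; the model choice, prompt, and helper name are all assumptions:

```python
# Rough sketch of a "browse for me"-style flow; NOT Arc's actual implementation.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

def browse_for_me(query: str, urls: list[str]) -> str:
    """Fetch a few pages, strip them to text, and ask an LLM to build an answer page."""
    pages = []
    for url in urls:
        html = requests.get(url, timeout=10).text
        text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
        pages.append(f"SOURCE: {url}\n{text[:4000]}")  # crude per-page length cap

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Compose a compact answer page: a headline, key facts as "
                        "bullets, and a 'Dive deeper' link list citing the sources."},
            {"role": "user", "content": f"Query: {query}\n\n" + "\n\n".join(pages)},
        ],
    )
    return response.choices[0].message.content

# e.g. print(browse_for_me("What happened in the Chiefs game?", six_result_urls))
```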

Arc Search is part of a bigger shift for the Arc browser, too. The company’s mobile app has, until now, been mostly a companion app to the desktop, a way to access your open tabs and not much else. With Arc starting to roll out to Windows users, The Browser Company is also getting ready to roll out its own cross-platform syncing system, called Arc Anywhere, and to bring some of these AI-powered features to Arc on other platforms. (Eventually, CEO Josh Miller says, Arc Search will just be called Arc and will be the company’s only mobile app.)

The “Browse for me” feature isn’t perfect, but it’s pretty impressive. When I search “What’s Pete Davidson up to?” for instance, it gives me some broad-strokes information about his recent film and breakup news (very useful!), links to his Wikipedia page, and a couple of news sites’ tag pages for Pete Davidson (meh), and then a bunch of information about his recent personal and professional goings-on. Like many AI tools, Arc Search isn’t great at citing its sources, so I can’t completely trust that Davidson and Chase Sui Wonders actually broke up, but there is a “Dive Deeper” section at the bottom with a bunch of links. Most of those links are the same generic stuff, like a “Pete Davidson’s net worth” webpage that I’m confident isn’t right, but there’s good stuff here, too.

The system has improved a lot even in the time I’ve been testing the app, and Miller says there’s plenty more room for Arc Search to get smarter. (The underlying AI models come from a mix of OpenAI and others.) Arc has been deeply invested in AI for a while, and some of its Arc Max features have been a hit with users. As the whole industry of generative AI tools improves, so will Arc.

I like Arc Search as a browser, too — it’s simple and fast and always opens to an empty search box, which feels right on mobile. But it does put The Browser Company in the middle of a lot of complicated AI discussions. Will the company work with the publishers whose information it’s using to populate these answers? How will Arc’s AI cite its sources? How personalized should these things be? How personalized can they be? A search like this is bound to be expensive; will Arc Search be a paid product over time? The company hasn’t shared much about its plans on these fronts yet, but there are a lot of questions to be answered.

But from a pure product perspective, this feels closer to the way AI search should work than anything I’ve tried. Products like Copilot and Perplexity AI are cool, but they’re fundamentally just chatbots with web access. Arc Search imagines something else entirely: AI that explores websites by building you a new one every time you ask.

Next week, millions of people will start typing the internet’s favorite question: “What time does the Super Bowl start?” I already know because I clicked “Browse for me,” and I also now know where the game is, how to watch it, and to be sure to clear my schedule for Usher at halftime. That’s a pretty good search result.
 


WILL KNIGHT

JAN 26, 2024 5:30 PM



OpenAI and Other Tech Giants Will Have to Warn the US Government When They Start New AI Projects

The Biden administration is using the Defense Production Act to require companies to inform the Commerce Department when they start training high-powered AI algorithms.

(L-R) Fei-Fei Li, Condoleezza Rice, Gina Raimondo, and Miriam Vogel speaking at the Hoover Institution on January 26, 2024. Courtesy of Department of Commerce

When OpenAI’s ChatGPT took the world by storm last year, it caught many power brokers in both Silicon Valley and Washington, DC, by surprise. The US government should now get advance warning of future AI breakthroughs involving large language models, the technology behind ChatGPT.

The Biden administration is preparing to use the Defense Production Act to compel tech companies to inform the government when they train an AI model using a significant amount of computing power. The rule could take effect as soon as next week.

The new requirement will give the US government access to key information about some of the most sensitive projects inside OpenAI, Google, Amazon, and other tech companies competing in AI. Companies will also have to provide information on safety testing being done on their new AI creations.

OpenAI has been coy about how much work has been done on a successor to its current top offering, GPT-4. The US government may be the first to know when work or safety testing really begins on GPT-5. OpenAI did not immediately respond to a request for comment.

"We're using the Defense Production Act, which is authority that we have because of the president, to do a survey requiring companies to share with us every time they train a new large language model, and share with us the results—the safety data—so we can review it,” Gina Raimondo, US secretary of commerce, said Friday at an event held at Stanford University’s Hoover Institution. She did not say when the requirement will take effect or what action the government might take on the information it received about AI projects. More details are expected to be announced next week.

The new rules are being implemented as part of a sweeping White House executive order issued last October. The executive order gave the Commerce Department a deadline of January 28 to come up with a scheme whereby companies would be required to inform US officials of details about powerful new AI models in development. The order said those details should include the amount of computing power being used, information on the ownership of data being fed to the model, and details of safety testing.

The October order calls for work to begin on defining when AI models should require reporting to the Commerce Department but sets an initial bar of 10^26 (100 septillion) floating-point operations, or flops, and a level 1,000 times lower for large language models working on DNA sequencing data. Neither OpenAI nor Google has disclosed how much computing power they used to train their most powerful models, GPT-4 and Gemini, respectively, but a congressional research service report on the executive order suggests that 10^26 flops is slightly beyond what was used to train GPT-4.
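For a sense of scale, a back-of-the-envelope sketch (with invented cluster numbers, since neither company discloses real figures) shows how a training run stacks up against the 10^26-operation reporting bar:

```python
# Back-of-the-envelope only: invented cluster numbers, not disclosed figures.
REPORTING_THRESHOLD = 1e26  # total floating-point operations, per the order

def training_flops(num_gpus: int, flops_per_gpu: float,
                   utilization: float, days: float) -> float:
    """Total operations = GPUs x peak flops/s x utilization x wall-clock seconds."""
    return num_gpus * flops_per_gpu * utilization * days * 86_400

# Hypothetical run: 20,000 H100-class GPUs (~1e15 flops/s each) at 40% utilization.
run = training_flops(num_gpus=20_000, flops_per_gpu=1e15,
                     utilization=0.4, days=120)
print(f"{run:.2e} operations -> must report: {run > REPORTING_THRESHOLD}")
# ~8.29e25: just under the bar; a longer or larger run would cross it.
```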

Raimondo also confirmed that the Commerce Department will soon implement another requirement of the October executive order, requiring cloud computing providers such as Amazon, Microsoft, and Google to inform the government when a foreign company uses their resources to train a large language model. Foreign projects must be reported when they cross the same initial threshold of 10^26 flops.

Raimondo’s announcement comes on the same day that Google touted the release of new data highlighting the prowess of its latest artificial intelligence model, Gemini, showing it surpassing OpenAI’s GPT-4, which powers ChatGPT, on some industry benchmarks. The Commerce Department may get early warning of Gemini’s successor, if the project uses enough of Google’s ample cloud computing resources.

Rapid progress in the field of AI last year prompted some AI experts and executives to call for a temporary pause on the development of anything more powerful than GPT-4, the model currently used for ChatGPT.

Samuel Hammond, senior economist at the Foundation for American Innovation, a think tank, says a key challenge for the federal government is that a model does not necessarily need to surpass a compute threshold in training to be potentially dangerous.


Dan Hendrycks, director of the nonprofit Center for AI Safety, says the requirement is proportionate given recent developments in AI and concerns about its power. “Companies are spending many billions on AI training, and their CEOs are warning that AI could be superintelligent in the next couple of years,” he says. “It seems reasonable for the government to be aware of what AI companies are up to.”

Anthony Aguirre, executive director of the Future of Life Institute, a nonprofit dedicated to ensuring transformative technologies benefit humanity, agrees. “As of now, giant experiments are running with effectively zero outside oversight or regulation,” he says. “Reporting those AI training runs and related safety measures is an important step. But much more is needed. There is strong bipartisan agreement on the need for AI regulation and hopefully congress can act on this soon.”

Raimondo said at the Hoover Institution event Friday that the National Institute of Standards and Technology (NIST) is working to define standards for testing the safety of AI models as part of the creation of a new US government AI Safety Institute. Determining how risky an AI model is typically involves probing it to try to evoke problematic behavior or output, a process known as "red teaming."

Raimondo said that her department is working on guidelines that will help companies better understand the risks that might lurk in the models they are hatching. These guidelines could include ways of ensuring AI cannot be used to commit human rights abuses, she suggested.

The October executive order on AI gives NIST until July 26 to have those standards in place, but some working with the agency say that it lacks the funds or expertise required to get this done adequately.
 
