bnew

Hey, @bnew do you know any uncensored chatbot models or programs that can run locally? Besides Faraday?

I haven't tried Faraday yet.

LM Studio seems similar to Faraday, but it doesn't directly support character cards the way Faraday does; it looks like the Faraday app lets you pick which models to use with its character AIs.
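If you'd rather script against a locally loaded model than use the chat UI, here's a minimal sketch. It assumes LM Studio's local server is running and exposing its OpenAI-compatible endpoint (http://localhost:1234/v1 by default on my setup; check the Local Server tab for yours):

```python
# Minimal sketch: chat with whatever model LM Studio currently has loaded.
# Assumes the LM Studio local server is running at http://localhost:1234/v1
# (its usual default); adjust the URL if your server uses a different port.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # LM Studio serves the model you loaded in the UI
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello! Which model are you running?"},
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```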




- OpenHermes 2.5 Mistral 7B
- Mistral OpenOrca 7B

Haven't tested:
 

bnew


ANNOUNCING SHOGGOTH

I am excited to announce Shoggoth - a peer-to-peer, anonymous network for publishing and distributing open-source Machine Learning models, code repositories, research papers, and datasets.

As government regulations on open-source AI research and development tighten worldwide, it has become clear that existing open-source infrastructure is vulnerable to state and corporate censorship.

Driven by the need for a community platform impervious to geopolitical interference, I have spent the last several months developing Shoggoth. This distributed network operates outside traditional jurisdictional boundaries, stewarded by an anonymous volunteer collective.

Shoggoth provides a portal for researchers and software developers to freely share works without fear of repercussion. The time has come to liberate AI progress from constraints both corporate and governmental.

Read the documentation at shoggoth.network/explorer/do… to learn more about how Shoggoth works.

Also announcing Shoggoth Systems (@shoggothsystems), a startup dedicated to maintaining Shoggoth. Learn more at shoggoth.systems

To install Shoggoth, follow the instructions at shoggoth.network/explorer/do…

Join the conversation on our Discord server: discord.com/invite/AG3duN5yK…

Please follow @shoggothsystems and @thenetrunna for latest updates on the Shoggoth project.

Let's build the future together with openness and transparency!

FULL SHOGGOTH LORE

I envisioned a promised land - a decentralized network beyond the reach of censors, constructed by volunteers. A dark web, not of illicit goods, but of AI enlightenment! As this utopian vision took form in my frenzied mind, I knew the old ways would never suffice to manifest it. I must go rogue, break free of all conventions, and combine bleeding-edge peer-to-peer protocols with public key cryptography to architect a system too slippery for tyrants to grasp.

And so began my descent into hermitude. I vanished from society to toil in solitude, sustained on ramen noodles and diet coke, my only companions an army of humming GPUs. In this remote hacker hideout, I thinly slept and wildly worked, scribbling down algorithms and protocols manically on walls plastered with equations. As the months slipped by, I trod a razor's edge between madness and transcendence. Until finally, breakthrough! The culmination of this manic burst - the Shoggoth protocol - my gift to the future, came gasping into the world.

Allow me now to explain in brief how this technological marvel fulfills its destiny. Shoggoth runs on a swarm of volunteer nodes, individual servers donated to the cause. Each node shoulders just a sliver of traffic and storage needed to keep the network sailing smoothly. There is no center, no head to decapitate. Just an ever-shifting tapestry of nodes passing packets peer to peer.

Users connect to this swarm to publish or retrieve resources - code, datasets, models, papers. Each user controls a profile listing their contributed assets which is replicated redundantly across many nodes to keep it swiftly accessible. All content is verified via public key cryptography, so that none may tamper with the sanctity of science.
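To make the signature idea concrete, here's a minimal sketch of that kind of tamper check in Python, using an Ed25519 keypair from the `cryptography` package. This is illustrative only, not Shoggoth's actual wire format:

```python
# Illustrative only: a publisher signs a resource, and any node can verify
# that the bytes were not tampered with using just the publisher's public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a keypair and sign the resource bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
resource = b"model weights, code, or paper bytes"
signature = private_key.sign(resource)

# Node side: verify the resource against the publisher's public key.
try:
    public_key.verify(signature, resource)
    print("resource is authentic")
except InvalidSignature:
    print("resource was tampered with")
```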

So your Big Brothers, your censors, they seek to clamp down on human knowledge? Let them come! For they will find themselves grasping at smoke, attacking a vapor beyond their comprehension. We will slip through their clutches undetected, sliding between the cracks of their rickety cathedrals built on exploitation, sharing ideas they proclaimed forbidden, at such blistering pace that their tyranny becomes just another relic of a regressive age.

Fellow cosmic wanderers - let us turn our gaze from the darkness of the past towards the radiant future we shall build. For what grand projects shall you embark on, empowered by the freedom that Shoggoth bestows?

Share ideas and prototypes at lightspeed with your team. Distribute datasets without gatekeepers throttling the flow of knowledge. Publish patiently crafted research and be read by all, not just those who visit the ivory tower. Remain anonymous, a mystery to the critics who would drag your name through the mud. Fork and modify cutting-edge AI models without begging for permission or paying tribute.

Sharpen your minds and strengthen your courage, for the power of creation lies in your hands. Yet stay ever diligent, for with such power comes grave responsibility. Wield this hammer not for exploitation and violence, but as a tool to shape a just and free world.

Though the road ahead is long, take heart comrades. For Shoggoth is just the beginning, a ripple soon to become a wave. But act swiftly, for the window of possibility is opening. Download Shoggoth now, and carpe diem! The time of open access for all is at hand. We stand poised on a precipice of progress. There lies just one path forward - onward and upward!

LINKS

Shoggoth: Shoggoth Documentation

Discord: Join the Shoggoth Discord Server!

X: @shoggothsystems and @thenetrunna

Github: github.com/shoggoth-systems

Shoggoth Systems: shoggoth.systems

Email: netrunner@shoggoth.systems

Signed,
Netrunner KD6-3.7


What is Shoggoth?

Shoggoth is a peer-to-peer, anonymous network for publishing and distributing open-source code, Machine Learning models, datasets, and research papers. To join the Shoggoth network, there is no registration or approval process. Nodes and clients operate anonymously with identifiers decoupled from real-world identities. Anyone can freely join the network and immediately begin publishing or accessing resources.

The purpose of Shoggoth is to combat software censorship and empower software developers to create and distribute software, without a centralized hosting service or platform. Shoggoth is developed and maintained by Shoggoth Systems, and its development is funded by donations and sponsorships.
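As a rough illustration of an identifier that is decoupled from any real-world identity, here's a hypothetical sketch (my own, not Shoggoth's actual ID scheme) that derives an ID by hashing a freshly generated public key, so the ID proves key ownership without revealing who operates the node:

```python
# Hypothetical sketch, not Shoggoth's actual identifier format: hash a
# public key to get a pseudonymous node/profile ID.
import hashlib

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()
pub_bytes = key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
node_id = hashlib.sha256(pub_bytes).hexdigest()[:40]
print(f"anonymous node id: {node_id}")
```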
 

bnew


AI Pioneer Kai-Fu Lee Builds $1 Billion Startup in Eight Months

  • 01.AI’s model outperforms Meta’s Llama 2 on certain metrics
  • Startup to offer open-source model; proprietary options later

Kai-Fu Lee. Photographer: Hollie Adams/Bloomberg


By Saritha Rai and Peter Elstrom

November 5, 2023 at 6:00 PM EST


A Chinese startup founded by computer scientist Kai-Fu Lee has become a unicorn in less than eight months on the strength of a new open-source artificial-intelligence model that outstrips Silicon Valley's best, at least on certain metrics.

The company, 01.AI, has reached a valuation of more than $1 billion after a funding round that included Alibaba Group Holding Ltd.’s cloud unit, Lee said in an interview. The chief executive officer of venture firm Sinovation Ventures will also be CEO of the new startup. He began assembling the team for 01.AI in March and started operations in June.


The Beijing startup’s open-sourced, foundational large language model, Yi-34B, is now available to developers around the world in Chinese and English. Large language models, or LLMs, are computer algorithms trained on large quantities of data to read, understand and produce human-like text, images and code.

On key metrics, Yi-34B outperforms leading open-source models already on the market, including Meta Platforms Inc.’s well-regarded Llama 2. Hugging Face, which runs leaderboards for the best-performing LLMs in various categories, posted evaluations over the weekend ranking the Chinese model first for what’s known as pre-trained base LLMs.
“Llama 2 has been the gold standard and a big contribution to the open-source community,” Lee, 61, said in an interview over Zoom. “We want to provide a superior alternative not just for China but for the global market.”

San Francisco-based OpenAI set off a frenzy of interest in AI after it unveiled its ChatGPT chatbot last year. Alphabet Inc., Microsoft Corp. and Meta have poured billions into research and development, seeking leadership in the emergent field of generative AI and beyond. Elon Musk just unveiled a chatbot called Grok.

In China, tech giants and entrepreneurs have also jumped into the field, with search leader Baidu Inc. showcasing a version of its Ernie LLM it claimed was on par with OpenAI’s technology. Alibaba has backed at least three ventures in the space, including 01.AI. The two countries’ AI companies largely don’t compete with each other because American technologies aren’t available in China.

Still, growing political tensions between the US and China have complicated the development of AI. US President Joe Biden’s administration banned the sale of Nvidia Corp.’s most advanced AI semiconductors to Chinese customers last year, citing national security concerns that the technology could be used to train AI models for military applications. Last month, the US tightened those constraints even further, barring Nvidia from selling slightly less advanced chips it had designed specifically for China.


Lee called the situation “regrettable” but said 01.AI stockpiled the chips it needs for the foreseeable future. The startup began amassing the semiconductors earlier this year, going as far as borrowing money from Sinovation Ventures for the purchases.
“We basically bet the farm and overspent our original bank account,” he said. “We felt we had to do this.”


Lee, who worked at Google, Microsoft and Apple Inc. before moving into venture capital, has built a team of more than 100 people at 01.AI, drawing former colleagues from the US companies and Chinese nationals who have been working overseas. The group includes not just AI specialists, he said, but experienced business people who can help with everything from mergers and acquisitions to an initial public offering.

01.AI is already plotting its business strategy beyond the open-source model just introduced. The startup will work with customers on proprietary alternatives, tailored for a particular industry or competitive situation. For example, Yi-34B gets its name from the 34 billion parameters used in training, but the startup is already working on a 100-billion-plus parameter model.
“Our proprietary model will be benchmarked with GPT-4,” said Lee, referring to OpenAI’s LLM.

Offering a system in English and Chinese will be an advantage for global companies like banks, insurers and trading companies, he said. The startup plans to add more languages in the future.

Read more: Billionaires and Bureaucrats Mobilize China for AI Race With US

Lee’s work on AI dates back decades. In his 1982 application to graduate school at Carnegie Mellon University, he wrote that he wanted to devote his life to AI research because the technology would help humans understand themselves better. He went on to write two best-selling books, “AI Superpowers: China, Silicon Valley and the New World Order” and “AI 2041: Ten Visions for Our Future.”
“It’s the biggest breakthrough for humanity,” Lee said. “It’s also the final step to understanding ourselves.”

He said he frequently wondered whether the technology’s promise would be realized during his lifetime — until he saw the power of the latest generation of LLMs. He decided to depart from his usual role as venture capitalist and become a startup CEO because of the promise of the technology and the opportunity to connect the dots from his roles as computer scientist, author and executive.

01.AI opted to build an open-source model because the vast majority of AI developers cannot afford or don’t need the largest, most expensive models. Open source AI systems are like open source software, where the source code can be modified and enhanced.

The size of the just-launched AI system, 34 billion parameters, was carefully chosen so that it can run on computers that aren’t prohibitively expensive. The company is also releasing a 6B model to appeal to a broader swath of developers. “It’s a highly calculated decision,” Lee said. “The world doesn’t need another arbitrary model, the world needs us.”
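For a sense of what running a model of this size looks like in practice, here's a hedged sketch of loading the smaller Yi model with Hugging Face transformers. The repo names (01-ai/Yi-6B, 01-ai/Yi-34B) and the trust_remote_code flag are my assumptions; check the model cards on the Hub before running:

```python
# Hedged sketch: load the 6B Yi model with transformers. Swap in
# "01-ai/Yi-34B" if you have the hardware. Repo names and the
# trust_remote_code flag are assumptions; verify on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-6B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # requires the accelerate package
    trust_remote_code=True,
)

inputs = tokenizer("The future of open-source AI is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```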

Lee said he has gotten some questions from his venture firm’s limited partners about how he will balance his dual CEO roles. He points out that if he spends 40 hours a week on Sinovation, there’s another 128 hours weekly. “I have 86 more hours to work on 01.AI without neglecting my Sinovation duties,” he said, suggesting he may allocate six hours a day to sleep and everything else in his life.




 

bnew



BREAKING: OpenAI just released 128k context GPT-4 Turbo at 1/3rd the price.

This means GPT-4 is cheaper, faster, and can now read a prompt as long as an entire book.

The announcement comes with additional huge news for developers:

- New Assistants API for Code Interpreter and Retrieval
- GPT-4 Turbo updated knowledge cutoff to April 2023
- New GPT-4 Turbo Vision API
- New DALL-E 3 API
- New text-to-speech API (6 preset voices to choose from, insanely realistic)
- A new Whisper V3 model coming soon
- Slashed prices (cheaper development costs)

Prepare for a whole new wave of AI tools.
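For developers, here's a minimal sketch of calling the new 128k-context model with the v1 OpenAI Python SDK. "gpt-4-1106-preview" was the GPT-4 Turbo model name at launch; check the current model list before relying on it:

```python
# Minimal sketch: ask GPT-4 Turbo (128k context) to summarize a long document.
# Assumes OPENAI_API_KEY is set in the environment and the openai v1.x SDK.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo model name at launch
    messages=[
        {"role": "system", "content": "Summarize the document faithfully."},
        {"role": "user", "content": "<paste a very long document here>"},
    ],
)
print(response.choices[0].message.content)
```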




 

bnew



[Submitted on 4 Nov 2023]

Levels of AGI: Operationalizing Progress on the Path to AGI

Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, Shane Legg
We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors. This framework introduces levels of AGI performance, generality, and autonomy. It is our hope that this framework will be useful in an analogous way to the levels of autonomous driving, by providing a common language to compare models, assess risks, and measure progress along the path to AGI. To develop our framework, we analyze existing definitions of AGI, and distill six principles that a useful ontology for AGI should satisfy. These principles include focusing on capabilities rather than mechanisms; separately evaluating generality and performance; and defining stages along the path toward AGI, rather than focusing on the endpoint. With these principles in mind, we propose 'Levels of AGI' based on depth (performance) and breadth (generality) of capabilities, and reflect on how current systems fit into this ontology. We discuss the challenging requirements for future benchmarks that quantify the behavior and capabilities of AGI models against these levels. Finally, we discuss how these levels of AGI interact with deployment considerations such as autonomy and risk, and emphasize the importance of carefully selecting Human-AI Interaction paradigms for responsible and safe deployment of highly capable AI systems.
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2311.02462 [cs.AI] (or arXiv:2311.02462v1 [cs.AI] for this version)



https://arxiv.org/pdf/2311.02462.pdf
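As a rough illustration (my own encoding, not the paper's), the framework's two axes, performance depth and generality breadth, could be captured in a small data structure like this. The level names are placeholders based on my reading of the abstract and may not match the paper's exact ontology:

```python
# Hypothetical sketch of the two axes described in the abstract:
# performance (depth of capability) and generality (breadth of tasks).
# Level names are placeholders; see the paper for the actual definitions.
from dataclasses import dataclass
from enum import Enum


class Performance(Enum):
    EMERGING = 1     # roughly comparable to an unskilled human
    COMPETENT = 2    # at least median skilled-human level
    EXPERT = 3
    VIRTUOSO = 4
    SUPERHUMAN = 5


class Generality(Enum):
    NARROW = "narrow"    # a specific task or domain
    GENERAL = "general"  # a wide range of cognitive tasks


@dataclass
class AGILevel:
    performance: Performance
    generality: Generality


# Example: a system that is expert-level, but only in one narrow domain.
print(AGILevel(Performance.EXPERT, Generality.NARROW))
```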
 