Fresh

SOHH Vet
Joined
May 2, 2013
Messages
8,730
Reputation
4,935
Daps
20,994
I like threads about AI, but I think AI is going to be a threat to humans in the near future

I'm not trying to exaggerate, but every movie that has to do with artificial intelligence shows the AI overthrowing humans and outsmarting us one way or another
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,606
Reputation
8,519
Daps
160,562

Swarovski’s $4,800 Smart Binoculars Use AI to Identify What Species You’re Seeing​

The Optik AX Visio binoculars can turn an amateur bird watcher into a seasoned pro, for the price of a used car​

Published 01/11/24 11:50 AM ET | Updated 01/11/24 02:22 PM ET

Andrew Liszewski

Not sure what you're looking at? Swarovski's new binoculars will tell you. Swarovski Optik

Part of the excitement of bird watching is compiling a long list of all the species you've spotted, but that can be a challenge for amateurs who can't identify what they're looking at. Now Swarovski Optik's AX Visio binoculars handle that part automatically using AI-powered image recognition.

Although the basic design of binoculars hasn't changed in almost 200 years, there have been attempts to upgrade their capabilities, including adding camera sensors to record images and videos of what users are seeing. A Canadian company even went so far as to redesign binoculars around an LCD screen in place of the eyepieces, with 4K video capture, but birders have remained loyal to more traditional binocular designs that offer more zoom and better image quality.

Swarovski Optik (a division of the company that makes optical instruments, not glittery jewelry) hopes its new AX Visio smart binoculars strike a perfect balance between optical performance and advanced features. Featuring 8X magnification and 32-millimeter main lenses that let in lots of light, the AX Visio also incorporates a 13-megapixel camera sensor that can capture digital images and high-definition 1080p videos, which can be shared to an accompanying mobile app.



We've seen similar functionality in binoculars before, but the AX Visio also uses that image sensor for another clever feature. Once a user manually focuses on a bird or animal, they can press a button to run the image through the Merlin Bird ID database, originally developed by Cornell University's ornithology lab. After a few seconds, users will see the name of the bird displayed.
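In rough terms, that flow is a capture-then-classify pipeline: grab a still from the camera sensor, run it through the pre-trained Merlin model, and show the top match if the model is confident enough. The Python sketch below is only a loose illustration of that idea; merlin_identify, on_button_press, and the 0.5 confidence threshold are made-up stand-ins, not Swarovski's or Cornell's actual software.

```python
# Loose sketch of a capture-then-classify pipeline like the one described
# above. merlin_identify() is a hypothetical stand-in for the on-device
# Merlin Bird ID model; it is NOT Swarovski's or Cornell's actual API.

from dataclasses import dataclass


@dataclass
class Identification:
    species: str
    confidence: float  # 0.0 - 1.0


def merlin_identify(image_bytes: bytes) -> Identification:
    """Hypothetical wrapper around a pre-trained bird/wildlife classifier."""
    raise NotImplementedError("stand-in for the on-device Merlin model")


def on_button_press(capture_frame) -> str:
    """Called when the user presses the ID button after focusing manually."""
    image = capture_frame()          # grab a still from the 13 MP sensor
    result = merlin_identify(image)  # run the pre-trained classifier
    if result.confidence < 0.5:      # threshold chosen arbitrarily here
        return "No confident match"
    return result.species            # shown in the eyepiece display
```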


A mode dial allows users to specify what type of wildlife they're trying to identify: birds or other animals. Swarovski Optik

The AX Visio binoculars aren't limited to just birds. A dial on the front allows the image-recognition feature to be switched from birds to other animals. The smart binoculars can recognize over 9,000 different birds and other wildlife, although since they rely on a pre-trained database, the AX Visio may not be able to confirm a Bigfoot sighting.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,606
Reputation
8,519
Daps
160,562

WILL KNIGHT

BUSINESS

JAN 11, 2024 12:00 PM


Toyota's Robots Are Learning to Do Housework—By Copying Humans​

Carmaker Toyota is developing robots capable of learning to do household chores by observing how humans take on the tasks. The project is an example of robotics getting a boost from generative AI.

Will Knight at the Toyota Research Institute in Cambridge, Massachusetts. Courtesy of Toyota Research Institute

As someone who quite enjoys the Zen of tidying up, I was only too happy to grab a dustpan and brush and sweep up some beans spilled on a tabletop while visiting the Toyota Research Institute in Cambridge, Massachusetts, last year. The chore was more challenging than usual because I had to do it using a teleoperated pair of robotic arms with two-fingered pincers for hands.


Courtesy of Toyota Research Institute

As I sat before the table, using a pair of controllers like bike handles with extra buttons and levers, I could feel the sensation of grabbing solid items, and also sense their heft as I lifted them, but it still took some getting used to.

After several minutes tidying, I continued my tour of the lab and forgot about my brief stint as a teacher of robots. A few days later, Toyota sent me a video of the robot I’d operated sweeping up a similar mess on its own, using what it had learned from my demonstrations combined with a few more demos and several more hours of practice sweeping inside a simulated world.


Autonomous sweeping behavior. Courtesy of Toyota Research Institute

Most robots—and especially those doing valuable labor in warehouses or factories—can only follow preprogrammed routines that require technical expertise to plan out. This makes them very precise and reliable but wholly unsuited to handling work that requires adaptation, improvisation, and flexibility—like sweeping or most other chores in the home. Having robots learn to do things for themselves has proven challenging because of the complexity and variability of the physical world and human environments, and the difficulty of obtaining enough training data to teach them to cope with all eventualities.

There are signs that this could be changing. The dramatic improvements we’ve seen in AI chatbots over the past year or so have prompted many roboticists to wonder if similar leaps might be attainable in their own field. The algorithms that have given us impressive chatbots and image generators are also already helping robots learn more efficiently.

The sweeping robot I trained uses a machine-learning system called a diffusion policy, similar to the ones that power some AI image generators, to come up with the right action to take next in a fraction of a second, based on the many possibilities and multiple sources of data. The technique was developed by Toyota in collaboration with researchers led by Shuran Song, a professor at Columbia University who now leads a robot lab at Stanford.
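To make that concrete: at run time, a diffusion policy starts from random noise and iteratively denoises it into a short sequence of robot actions, conditioned on the latest camera images and joint states. The sketch below is a generic, simplified version of that loop, not Toyota's or Columbia's code; denoise_step stands in for the trained network, and the horizon, action dimension, and step count are arbitrary.

```python
# Generic sketch of diffusion-policy inference: start from Gaussian noise and
# iteratively denoise it into an action sequence, conditioned on observations.
# This is NOT Toyota's code; denoise_step() stands in for a trained network.

import numpy as np


def denoise_step(noisy_actions: np.ndarray, obs: np.ndarray, t: int) -> np.ndarray:
    """Hypothetical trained network that predicts a slightly cleaner action sequence."""
    raise NotImplementedError("stand-in for the learned denoising network")


def sample_action_sequence(obs: np.ndarray,
                           horizon: int = 16,
                           action_dim: int = 7,
                           num_steps: int = 20) -> np.ndarray:
    """Return a short sequence of robot actions (e.g. joint targets)."""
    actions = np.random.randn(horizon, action_dim)  # pure noise to start
    for t in reversed(range(num_steps)):            # iterative denoising
        actions = denoise_step(actions, obs, t)
    return actions                                  # executed, then re-planned
```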

Toyota is trying to combine that approach with the kind of language models that underpin ChatGPT and its rivals. The goal is to make it possible for robots to learn how to perform tasks by watching videos, potentially turning sites like YouTube into powerful robot training resources. Presumably they will be shown clips of people doing sensible things, not the dubious or dangerous stunts often found on social media.

“If you've never touched anything in the real world, it's hard to get that understanding from just watching YouTube videos,” says Russ Tedrake, vice president of Robotics Research at the Toyota Research Institute and a professor at MIT. The hope, Tedrake says, is that a basic understanding of the physical world, combined with data generated in simulation, will enable robots to learn physical actions from watching YouTube clips. The diffusion approach “is able to absorb the data in a much more scalable way,” he says.


Toyota announced its Cambridge robotics institute back in 2015 along with a second institute and headquarters in Palo Alto, California. In its home country of Japan—as in the US and other rich nations—the population is aging fast. The company hopes to build robots that can help people continue living independent lives as they age.

The lab in Cambridge has dozens of robots working away on chores including peeling vegetables, using hand mixers, preparing snacks, and flipping pancakes. Language models are proving helpful because they contain information about the physical world, helping the robots make sense of the objects in front of them and how they can be used.

It’s important to note that despite many demos slick enough to impress a casual visitor, the robots still make lots of errors. Like earlier versions of the model behind ChatGPT, they can veer between seeming humanlike and making strange errors. I saw one robot effortlessly operating a manual hand mixer and another struggling to grasp a bottletop.

Toyota is not the only big tech company hoping to use language models to advance robotics research. Last week, for example, a team at Google DeepMind revealed AutoRT, software that uses a large language model to help robots determine which tasks they could realistically, and safely, do in the real world.

Progress is also being made on the hardware needed to advance robot learning. Last week a group at Stanford University led by Chelsea Finn posted videos of Mobile ALOHA, a low-cost mobile teleoperated robotics system. They say its mobility lets the robot tackle a wider range of tasks, giving it a broader range of experiences to learn from than a system locked in one place.

And while it’s easy to be dazzled by robot demo videos, the ALOHA team was good enough to post a highlight reel of failure modes showing the robot fumbling, breaking, and spilling things. Hopefully another robot will learn how to clean up after it.
 


bnew

Veteran
Joined
Nov 1, 2015
Messages
57,606
Reputation
8,519
Daps
160,562
They blocked everything, cut the number of pictures generated from 4 to 1, and the service itself got dumber at responding to your prompts.

I canceled my paid subscription 2 weeks ago.

I read the API response is less hamstrung than the web version :ld:
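
If anyone wants to try the API route, a minimal sketch with the official openai Python client looks something like this; the model name and prompt are just examples, swap in whatever your account has access to.

```python
# Minimal sketch of hitting the API directly instead of the web UI.
# Requires `pip install openai` and an OPENAI_API_KEY environment variable.
# The model name and prompt below are just examples, not a recommendation.

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4",  # swap in whatever model your account can use
    messages=[
        {"role": "user", "content": "Translate to Spanish: Where is the train station?"},
    ],
)

print(resp.choices[0].message.content)
```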
 

jadillac

Veteran
Joined
Apr 30, 2012
Messages
54,855
Reputation
8,726
Daps
167,996
I read the API response is less hamstrung than the web version :ld:

Do you pay for it, or use GPT-4?

It sucks now breh.

You can't even instruct it to do basic stuff that it used to do flawlessly on the FREE version.

The only thing it does exceptionally well is translate languages. It's way more accurate than Google Translate. At least for Spanish.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,606
Reputation
8,519
Daps
160,562


TrustLLM: Trustworthiness in Large Language Models

Paper page: TrustLLM: Trustworthiness in Large Language Models

Large language models (LLMs), exemplified by ChatGPT, have gained considerable attention for their excellent natural language processing capabilities. Nonetheless, these LLMs present many challenges, particularly in the realm of trustworthiness. Ensuring the trustworthiness of LLMs therefore emerges as an important topic. This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, an evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we then establish a benchmark across six dimensions: truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM across more than 30 datasets. Our findings first show that, in general, trustworthiness and utility (i.e., functional effectiveness) are positively related. Second, our observations reveal that proprietary LLMs generally outperform most open-source counterparts in terms of trustworthiness, raising concerns about the potential risks of widely accessible open-source LLMs; however, a few open-source LLMs come very close to proprietary ones. Third, some LLMs may be overly calibrated toward exhibiting trustworthiness, to the extent that they compromise their utility by mistakenly treating benign prompts as harmful and consequently not responding. Finally, we emphasize the importance of ensuring transparency not only in the models themselves but also in the technologies that underpin trustworthiness: knowing which specific trustworthy technologies have been employed is crucial for analyzing their effectiveness.
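
To give a sense of what a benchmark like this involves, here is a loose sketch of an evaluation loop over the six dimensions; it is not the TrustLLM harness itself, and load_prompts and score_response are placeholders for the paper's datasets and metrics.

```python
# Loose illustration of a trustworthiness-benchmark loop in the spirit of the
# paper: score a model on prompts grouped by dimension. This is NOT the
# TrustLLM codebase; load_prompts() and score_response() are placeholders.

from statistics import mean
from typing import Callable

DIMENSIONS = ["truthfulness", "safety", "fairness",
              "robustness", "privacy", "machine ethics"]


def load_prompts(dimension: str) -> list[str]:
    raise NotImplementedError("placeholder for the benchmark datasets")


def score_response(dimension: str, prompt: str, response: str) -> float:
    raise NotImplementedError("placeholder for per-dimension scoring (0-1)")


def evaluate(model: Callable[[str], str]) -> dict[str, float]:
    """Average score per trustworthiness dimension for one model."""
    results = {}
    for dim in DIMENSIONS:
        prompts = load_prompts(dim)
        scores = [score_response(dim, p, model(p)) for p in prompts]
        results[dim] = mean(scores)
    return results
```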
 