
OpenAI suspends bot developer for presidential hopeful Dean Phillips​

It’s the ChatGPT maker’s first known action against the use of its technology in a political campaign​

By Elizabeth Dwoskin

Updated January 22, 2024 at 6:11 p.m. EST|Published January 20, 2024 at 5:33 p.m. EST


Rep. Dean Phillips (D-Minn.), right, on Thursday in Hanover, N.H. (Matt McClain/The Washington Post)


The artificial intelligence company OpenAI banned the developer of a bot mimicking long-shot Democratic presidential hopeful Rep. Dean Phillips — the first action that the maker of ChatGPT has taken in response to what it sees as a misuse of its AI tools in a political campaign.

Dean.Bot was the brainchild of Silicon Valley entrepreneurs Matt Krisiloff and Jed Somers, who had started a super PAC supporting Phillips (Minn.) ahead of the New Hampshire primary on Tuesday. The PAC had received $1 million from hedge fund manager Bill Ackman, the billionaire activist who led the charge to oust Harvard University president Claudine Gay.

The bot was powered by OpenAI’s ChatGPT conversational software, which the company has made available to outside developers.

The super PAC, called We Deserve Better, had contracted with AI start-up Delphi to build the bot. OpenAI suspended Delphi’s account late Friday in response to a Washington Post story on the super PAC, noting that OpenAI’s rules ban the use of its technology in political campaigns. Delphi took down Dean.Bot after the account suspension.

“Anyone who builds with our tools must follow our usage policies,” OpenAI spokeswoman Lindsey Held said in a statement. “We recently removed a developer account that was knowingly violating our API usage policies which disallow political campaigning, or impersonating an individual without consent.”

Delphi co-founder Dara Ladjevardian told The Post on Monday that the company “incorrectly” believed that OpenAI’s terms of service would let “a political action committee that supports Dean Phillips create a clone of him using our platform.”

The start-up “did not understand that … [the super PAC] may not and did not coordinate with or seek permission from candidates they are supporting,” Ladjevardian said, adding that he had refunded the super PAC and updated the company’s terms of service to ban engagement with political campaigns.

Dean.Bot, which could converse with voters in real time through a website, was an early use of an emerging technology that researchers have said could cause significant harm to elections.

The bot included a disclaimer explaining that it was an AI tool and not the real Dean Phillips, and required that voters consent to its use. But researchers told The Post that such technologies could lull people into accepting a dangerous tool, even when disclaimers are in place.

Proponents, including We Deserve Better, argue that the bots, when used appropriately, can educate voters by giving them an entertaining way to learn more about a candidate.

Without disclaimers, experts have said, the technologies could enable mass robocalls to voters who think they’re talking to actual candidates or supporters. AI systems can also produce disinformation in ads or content, such as fake websites, at scale.


After The Post asked We Deserve Better about OpenAI’s prohibitions on Thursday, Krisiloff said he had asked Delphi to remove ChatGPT from the bot and rely instead on the open-source conversational technologies that had also gone into the bot’s design.

The bot remained available to the public without ChatGPT until late Friday, when Delphi took the bot down in response to the suspension, Krisiloff said.

Krisiloff did not have further comment. Delphi did not immediately respond to a request for comment.

Krisiloff is a former chief of staff to OpenAI CEO Sam Altman. Altman has met with Phillips but has no involvement in the super PAC, Krisiloff said.
 


MLCommons wants to create AI benchmarks for laptops, desktops and workstations​

Kyle Wiggers @kyle_l_wiggers / 12:16 PM EST•January 24, 2024

Image Credits: NanoStockk / Getty Images

As AI increasingly moves from the cloud to on-device, how, exactly, is one supposed to know whether such and such new laptop will run a GenAI-powered app faster than rival off-the-shelf laptops — or desktops or all-in-ones, for that matter? Knowing could mean the difference between waiting a few seconds for an image to generate versus a few minutes — and as they say, time is money.

MLCommons, the industry group behind a number of AI-related hardware benchmarking standards, wants to make it easier to comparison shop with the launch of performance benchmarks targeted at “client systems” — i.e. consumer PCs.

Today, MLCommons announced the formation of a new working group, MLPerf Client, whose goal is establishing AI benchmarks for desktops, laptops and workstations running Windows, Linux and other operating systems. MLCommons promises that the benchmarks will be “scenario-driven,” focusing on real end-user use cases and “grounded in feedback from the community.”

To that end, MLPerf Client’s first benchmark will focus on text-generating models, specifically Meta’s Llama 2, which MLCommons executive director David Kanter notes has already been incorporated into MLCommons’ other benchmarking suites for datacenter hardware. Meta has also done extensive work with Qualcomm and Microsoft to optimize Llama 2 for Windows — much to the benefit of Windows-running devices.
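It’s easy to see in miniature what a scenario-driven text-generation benchmark has to measure. The sketch below is a hypothetical illustration, not MLPerf Client’s actual harness (which hasn’t been published): it reports time-to-first-token and tokens-per-second for any local runtime that yields tokens as a stream. The generate_stream callable is an assumed placeholder, not a real MLCommons API.

import time

def benchmark_generation(generate_stream, prompt, max_tokens=128):
    # Time a streaming token generator. generate_stream stands in for
    # whatever local Llama 2 runtime the machine under test ships with;
    # it only needs to yield tokens one at a time.
    start = time.perf_counter()
    first_token_at = None
    count = 0
    for _ in generate_stream(prompt, max_tokens=max_tokens):
        if first_token_at is None:
            first_token_at = time.perf_counter()  # latency to first token
        count += 1
    total = time.perf_counter() - start
    return {
        "time_to_first_token_s": first_token_at - start if first_token_at else None,
        "tokens_per_second": count / total if total > 0 else 0.0,
    }

Run the same prompt set through a harness like this on two machines and the numbers are directly comparable, which is roughly the comparison-shopping scenario MLCommons describes.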

“The time is ripe to bring MLPerf to client systems, as AI is becoming an expected part of computing everywhere,” Kanter said in a press release. “We look forward to teaming up with our members to bring the excellence of MLPerf into client systems and drive new capabilities for the broader community.”

Members of the MLPerf Client working group include AMD, Arm, Asus, Dell, Intel, Lenovo, Microsoft, Nvidia and Qualcomm — but notably not Apple.

Apple isn’t a member of MLCommons, either, and a Microsoft engineering director (Yannis Minadakis) co-chairs the MLPerf Client group — which makes the company’s absence not entirely surprising. The disappointing outcome, however, is that whatever AI benchmarks MLPerf Client conjures up won’t be tested across Apple devices — at least not in the near-ish term.

Still, this writer’s curious to see what sort of benchmarks and tooling emerge from MLPerf Client, macOS-supporting or no. Assuming GenAI is here to stay — and there’s no indication that the bubble is about to burst anytime soon — I wouldn’t be surprised to see these types of metrics play an increasingly important role in device buying decisions.

In my best-case scenario, the MLPerf Client benchmarks are akin to the many PC build comparison tools online, giving an indication as to what AI performance one can expect from a particular machine. Perhaps they’ll expand to cover phones and tablets in the future, even, given Qualcomm’s and Arm’s participation (both are heavily invested in the mobile device ecosystem). It’s clearly early days — but here’s hoping.
 


Etsy launches ‘Gift Mode,’ a new AI-powered feature that generates 200+ gift guides

Lauren Forristal @laurenforristal / 7:00 AM EST•January 24, 2024


Image Credits: Etsy

E-commerce site Etsy today launched “Gift Mode,” a new AI-powered feature to match you with tailored gift ideas based on specific preferences.

Gift Mode is essentially an online quiz that asks about who you’re shopping for (sibling, parent, child), the occasion (birthday, anniversary, get well), and the recipient’s interests. At launch, the feature has 15 interests to choose from, including crafting, fashion, sports, video games, pets, and more. It then generates a series of gift guides inspired by your choices, pulling options from the over 100 million items listed on the platform.

The gift guides are centered around more than 200 different personas. For instance, “The Music Lover,” “The Video Gamer,” “The Adventurer” and “The Pet Parent.” Over time, the company plans to add new interests and personas in response to “emerging trends,” Tim Holley, Etsy’s VP of Product, told TechCrunch.

Additionally, Etsy introduced a “gift teaser” option where you can send the recipient a sneak peek of the item via email in case the gift won’t arrive in time for the event. You can also include a gift note and tracking information.

Online retailer UncommonGoods did something similar in 2017 when it released “Sunny,” its AI-powered gift guide feature that suggests products based on inputs like who you’re shopping for, their age group, and hobbies they’re interested in. There’s also a text box to add more specific preferences.


Image Credits: Etsy

Etsy hopes Gift Mode will relieve the stress that comes with selecting the perfect present. According to Etsy’s latest survey, 71% of respondents felt anxious when shopping for gifts within the past year.

“We built Gift Mode on the insight that while plenty of brands sell items that make nice gifts, there isn’t a dedicated and scaled shopping experience designed for the gifter that addresses these pain points while helping shoppers find truly thoughtful, meaningful gifts. Now, Etsy is building for the gifting experience year-round, to become the destination for gifting. And we believe these products will help people discover and more easily purchase gifts from independent sellers,” Holley added.

Holley explains that the new feature leverages a combination of machine learning, human curation, and OpenAI’s GPT-4.

“OpenAI’s GPT-4 enabled us to expand to such a wide range of gifting missions. We then built and scaled the product on Etsy’s existing advanced machine learning training and inference stack – which powers search, recommendations, and more on Etsy. For this use case, we included specific adjustments to make sure listings included are ‘gifty’ and add preference to listings that are predicted to be high quality and sold by sellers who offer excellent customer service,” Holley said.
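Holley’s description suggests a two-stage pipeline, and a rough sketch makes the division of labor clear. This is a hypothetical illustration only; the function and field names (ask_gpt4, personas, quality_score, seller_rating) are assumptions for the sketch, not Etsy’s actual systems. A language model maps the quiz answers to a persona, then conventional ranking filters for “gifty,” high-quality listings from well-reviewed sellers.

def pick_persona(ask_gpt4, recipient, occasion, interests, personas):
    # Stage 1: let the language model choose the closest persona.
    prompt = (
        f"Recipient: {recipient}. Occasion: {occasion}. "
        f"Interests: {', '.join(interests)}. "
        f"Pick the single best match from: {', '.join(personas)}."
    )
    return ask_gpt4(prompt).strip()

def rank_gift_listings(listings, persona, min_quality=0.7):
    # Stage 2: keep listings tagged for the persona that are predicted
    # high quality and come from well-reviewed sellers, best first.
    matches = [
        item for item in listings
        if persona in item["personas"]
        and item["quality_score"] >= min_quality
        and item["seller_rating"] >= 4.5
    ]
    return sorted(matches, key=lambda item: item["quality_score"], reverse=True)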

AI has powered the platform’s search features for years, and the company recently increased its focus on the area, particularly generative AI.

“We’ve been at the cutting edge of search technology for the past several years and while we use large language models today, we couldn’t be more excited about the potential of new or large language models and generative AI to further accelerate the transformation of Etsy’s user experience,” Etsy CEO Josh Silverman told investors during an earnings call in May 2023.

Etsy has released other gift-related offerings in the past, including wedding and baby registries. The company says it will continue its investment in the gifting space and enhance Gift Mode’s capabilities over time.

The platform recently announced a new optional program, “Share & Save,” where sellers share links to their Etsy store, and after making a sale, the platform only takes a 4% transaction fee instead of the usual 6.5%. In 2022, the company received backlash from sellers when it increased the fee to 6.5% — up from 5% — which resulted in a strike.

Etsy touts more than 90 million customers and over six million sellers.
 


Google announces new AI-powered features for education​

Ivan Mehta @indianidle / 8:52 AM EST•January 24, 2024


Image Credits: Sean Gallup / Getty Images

Google announced a bunch of new features for classroom management and accessibility, along with AI-powered tools for creating questions and lesson plans, as part of the Bett ed-tech event in the U.K.



AI-powered features​

Google already allows teachers to add questions to a YouTube video as part of a Classroom assignment. The company said that soon, AI will help with suggestions for questions at different timestamps. The company first announced this feature in June 2023. However, at that time, teachers had to apply to get access.

The company added that the Practice sets feature, which uses AI to create answers and general hints, is now available in over 50 languages. Plus, educators can turn a Google Form into a practice set.

Additionally, Google is introducing a new Resources tab to manage practice sets and interactive questions asked during a video.

Image Credits: Google

Additionally, the company mentioned that its generative AI tool for Google Workspace, Duet AI, can assist teachers in coming up with a lesson plan.



Class management​

Google said that teachers will be able to form different student groups in Classroom later this year. They can also give different assignments to different groups.

Plus, they can use the speaker spotlight feature in Slides to create a lesson with narration along with the slide deck.

Image Credits: Google

Educators can also easily collect signatures for approvals or contracts through the eSignature facility available in Google Workspace.

The company is also updating Classroom analytics so educators can look at stats like assignment completion and trends for grades.



Accessibility​

Google is adding the ability to extract text from PDFs for screen readers on ChromeOS. Plus, in the Chrome browser, users can highlight text and have it read aloud.

Image Credits: Google

The company is also adding features to Google Meet such as closed captions in 30 languages and the ability to pin multiple hosts.



Chromebooks​

The company said that it is exhibiting 15 new Chromebooks at Bett, which will launch this year. Plus, Google is soon updating Chromebooks so that students can look at their upcoming assignments directly from the home screen.

Image Credits: Google
 


There’s an AI ‘brain drain’ in academia​

Kyle Wiggers @kyle_l_wiggers / 10:00 AM EST•January 24, 2024

Image Credits: Getty Images

As one might expect, lots of students who graduate with a doctorate in an AI-related field end up joining an AI company, whether a startup or Big Tech giant.


According to Stanford’s 2021 Artificial Intelligence Index Report, the number of new AI PhD graduates in North America entering the AI industry post-graduation grew from 44.4% in 2010 to around 48% in 2019. By contrast, the share of new AI PhDs entering academia dropped from 42.1% in 2010 to 23.7% in 2019.

Private industry’s willingness to pay top dollar for AI talent is likely a contributing factor.

Job listings from the biggest AI ventures, like OpenAI and Anthropic, advertise eye-popping salaries ranging from $700,000 to $900,000 for new researchers, per data from salary negotiation service Rora. Google has reportedly gone so far as to offer large grants of restricted stock to incentivize leading data scientists.

While AI graduates are no doubt welcoming the trend — who wouldn’t kill for a starting salary that high? — it’s having an alarming impact on academia.

A 2019 survey co-authored by researchers at the Hebrew University of Jerusalem and the Cheung Kong Graduate School of Business in Beijing found that close to 100 AI faculty members left North American universities for industry jobs between 2018 and 2019 — an outsized cohort in the context of a specialized computer science field. Between 2004 and 2019, Carnegie Mellon alone saw 16 AI faculty members depart, and the Georgia Institute of Technology and the University of Washington lost roughly a dozen each, the study found.
 


Feds kick off National AI Research Resource with pilot program live today​

Devin Coldewey @techcrunch / 11:01 AM EST•January 24, 2024


Image Credits: metamorworks / Getty Images

A year to the day after it was proposed, the National AI Research Resource is coming online — at least in pilot form — as a coalition of U.S. agencies and private partners start to apply billions in federal funding towards public-access tools for aspiring AI scientists and engineers.

NAIRR is the Biden administration’s answer to the sudden rise of AI in the global tech scene, and the concentration of its resources and expertise among a relatively small group of tech giants and privately-funded startups. In an attempt to democratize the tech a bit and keep the U.S. competitive with its rivals abroad, the feds decided to dedicate some money to making a variety of resources available to any qualified researcher.

The National Science Foundation, Department of Energy, NASA, NOAA, DARPA and others are all partners in the effort, both providing resources (like datasets and consultation) and working with applicants in their areas of expertise. And more than two dozen major tech companies are also contributing in some way. The whole thing has an $800 million per-year budget for the next three years, subject to congressional approval of course.

In a panoply of statements, executives from OpenAI, Anthropic, Nvidia, Meta, Amazon and Microsoft committed a variety of resources, expertise, free access and so on to the NAIRR effort.

The resources that will be made available haven’t been listed anywhere. Instead, the overall organization will accept applications and proposals, which will be evaluated and assigned resources. Think of it more like a grant-making process than a free supercomputer.

As the NSF’s Katie Antypas put it, NAIRR “will provide the research community access to the computing, the data, the models, the software, and the training resources that are necessary to advance the AI ecosystem. The NAIRR pilot is really needed because the resources needed to even begin participating in the ecosystem have become increasingly concentrated and inaccessible to many, many communities that are really essential for developing a healthy and responsible AI ecosystem. And so the pilot is the first step to bridging this gap.”

She gave three examples: a researcher looking at large AI models who needs large-scale computing resources and has no way of accessing them; a teacher who wants to let kids do AI-related homework (like training custom models) but needs resources like virtual notebooks and compute time; and someone looking at predicting climate and weather events, who can access NASA and NOAA datasets and combine them with hosted models.

For the two-year pilot period, there will be four focus areas:
  • NAIRR Open, the most general category, involving “access to diverse AI resources,” presumably for research and projects that don’t fit the narrower categories that follow.
  • NAIRR Secure, focused on AI applications that need privacy and security, such as medical and critical infrastructure. This part is led by the NIH and the Department of Energy, unsurprisingly.
  • NAIRR Software is more focused on tools, platforms, services, and interoperability.
  • NAIRR Classroom is about outreach, education, and training.
It may surprise some that there is no overtly military research category (considering the presence of DARPA and the DOD among the partner agencies), but remember this is a civilian research effort led by the executive agencies. Presumably any military research will be siloed, and the military agencies are there to coordinate and delegate, or offer such resources as are appropriate in this case.

The idea is that if someone has a worthwhile idea about how to apply or advance AI in any sector, there should be a domain expert and a check waiting. But it won’t be like a public library, where you walk into your local AI center and someone sets you up with an H100 for a half-hour slot. (That said, I wouldn’t be surprised if there were some kind of library-centric outreach program.)

You’ll be able to peruse the list of resources at the NAIRR Pilot page starting today, and although there are no hard numbers yet, the leaders of the project said that only 25-50 proposals will likely be accepted in this initial pilot period, with hundreds more spots opening up in the spring when more systems come online.
 


Musk’s AI Startup Secures $500 Million Toward $1 Billion Funding Goal​


  • Grok maker aims for valuation of at least $15 billion
  • Investors in X, formerly Twitter, getting priority in round
By Lizette Chapman, Edward Ludlow, and Katie Roof

January 19, 2024 at 7:09 PM EST; updated January 19, 2024 at 9:53 PM EST


Elon Musk’s artificial intelligence company, xAI, has secured $500 million in commitments from investors toward a $1 billion goal, according to people with knowledge of the talks.


The company is discussing a valuation of $15 billion to $20 billion, though terms could still change in the coming weeks, the people said, declining to be named because they were not authorized to speak publicly about the investment. Musk said on X, the network formerly called Twitter, that the report was “fake news.”

Musk launched the startup last year as an alternative to OpenAI, which he co-founded and later left over philosophical differences about how to profit from the technology. xAI’s product, a chatbot named Grok, is developed using social media posts on X, which Musk also owns. That allows Grok to access more recent data for its answers than other chatbots.

Read More: Elon Musk’s Grok Represents a Serious Threat to ChatGPT

The two companies’ investors will likely overlap too.

Those who backed Musk’s $44 billion takeover of Twitter include Larry Ellison, Sequoia Capital, Andreessen Horowitz, Fidelity Management & Research Co. and Saudi Prince Alwaleed bin Talal.

Musk said in November that equity investors in X will own 25% of xAI. In practice, that means those investors are invited to invest in xAI at least 25% the amount they invested in X, according to a person with knowledge of the arrangement. If they invested $10 billion in X, they’re invited to invest $2.5 billion or more in xAI, for example.

Musk and investors are expected to finalize terms in the next couple weeks, the people said.

Some parties are evaluating whether they can get computing power in addition to, or in some cases instead of, xAI equity shares, one of the people said. That would be beneficial to venture firms’ portfolio companies, which need to process data intensively in order to build new artificial intelligence products.

— With assistance from Kurt Wagner

(Updates with Musk’s comment in second paragraph.)
 



Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model​

17 Jan 2024 · Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, Xinggang Wang
Recently the state space models (SSMs) with efficient hardware-aware designs, i.e., Mamba, have shown great potential for long sequence modeling. Building efficient and generic vision backbones purely upon SSMs is an appealing direction. However, representing visual data is challenging for SSMs due to the position-sensitivity of visual data and the requirement of global context for visual understanding. In this paper, we show that the reliance of visual representation learning on self-attention is not necessary and propose a new generic vision backbone with bidirectional Mamba blocks (Vim), which marks the image sequences with position embeddings and compresses the visual representation with bidirectional state space models. On ImageNet classification, COCO object detection, and ADE20k semantic segmentation tasks, Vim achieves higher performance compared to well-established vision transformers like DeiT, while also demonstrating significantly improved computation & memory efficiency. For example, Vim is 2.8× faster than DeiT and saves 86.8% GPU memory when performing batch inference to extract features on images with a resolution of 1248×1248. The results demonstrate that Vim is capable of overcoming the computation & memory constraints on performing Transformer-style understanding for high-resolution images and it has great potential to become the next-generation backbone for vision foundation models. Code is available at https://github.com/hustvl/Vim.
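To make the “bidirectional state space” idea concrete, here is a minimal PyTorch sketch. It is not the authors’ implementation: a toy linear recurrence stands in for Mamba’s hardware-aware selective-scan kernel. What it preserves is the structure the abstract describes, with the patch sequence scanned forward and backward and the two passes summed into a residual.

import torch
import torch.nn as nn

class ToySSMScan(nn.Module):
    # Stand-in linear recurrence: h_t = a * h_{t-1} + B x_t, y_t = C h_t.
    def __init__(self, dim, state):
        super().__init__()
        self.a = nn.Parameter(torch.rand(state) * 0.5 + 0.4)  # per-state decay
        self.B = nn.Linear(dim, state, bias=False)
        self.C = nn.Linear(state, dim, bias=False)

    def forward(self, x):                      # x: (batch, seq, dim)
        h = torch.zeros(x.size(0), self.a.numel(), device=x.device)
        outs = []
        for t in range(x.size(1)):             # sequential O(L) scan
            h = self.a * h + self.B(x[:, t])
            outs.append(self.C(h))
        return torch.stack(outs, dim=1)

class BidirectionalSSMBlock(nn.Module):
    # Scan the token sequence in both directions and sum into a residual.
    def __init__(self, dim, state=16):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fwd = ToySSMScan(dim, state)
        self.bwd = ToySSMScan(dim, state)

    def forward(self, x):
        z = self.norm(x)
        y_fwd = self.fwd(z)
        y_bwd = self.bwd(z.flip(1)).flip(1)    # backward pass on reversed tokens
        return x + y_fwd + y_bwd

tokens = torch.randn(2, 196, 192)              # e.g. 14x14 image patches, dim 192
print(BidirectionalSSMBlock(192)(tokens).shape)  # torch.Size([2, 196, 192])

The backward scan is what gives every patch access to context in both directions, the role self-attention plays in a ViT, while keeping the per-step state fixed in size.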

 

I Can See Clearly Now: AI-Powered NVIDIA RTX Video HDR Transforms Standard Video Into Stunning High Dynamic Range

RTX Remix open beta adds full ray tracing, DLSS, Reflex and generative AI tools for modders; the new GeForce RTX 4070 Ti SUPER is available now; the January Studio Driver is released; and 3D artist Vishal Ranga creates vibrant scenes using AI this week ‘In the NVIDIA Studio.’

January 24, 2024
by GERARDO DELGADO


Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

RTX Video HDR — first announced at CES — is now available for download through the January Studio Driver. It uses AI to transform standard dynamic range video playing in internet browsers into stunning high dynamic range (HDR) on HDR10 displays.

PC game modders now have a powerful new set of tools to use with the release of the NVIDIA RTX Remix open beta.

It features full ray tracing, NVIDIA DLSS, NVIDIA Reflex, modern physically based rendering assets and generative AI texture tools so modders can remaster games more efficiently than ever.

Pick up the new GeForce RTX 4070 Ti SUPER available from custom board partners in stock-clocked and factory-overclocked configurations to enhance creating, gaming and AI tasks.




Get creative superpowers with the GeForce RTX 4070 Ti SUPER available now.

Part of the 40 SUPER Series announced at CES, it’s equipped with more CUDA cores than the RTX 4070, a frame buffer increased to 16GB, and a 256-bit bus — perfect for video editing and rendering large 3D scenes. It runs up to 1.6x faster than the RTX 3070 Ti and 2.5x faster with DLSS 3 in the most graphics-intensive games.

And this week’s featured In the NVIDIA Studio technical artist Vishal Ranga shares his vivid 3D scene Disowned — powered by NVIDIA RTX and Unreal Engine with DLSS.



RTX Video HDR Delivers Dazzling Detail


Using the power of Tensor Cores on GeForce RTX GPUs, RTX Video HDR allows gamers and creators to maximize their HDR panel’s ability to display vivid, dynamic colors, preserving intricate details that may be inadvertently lost due to video compression.



RTX Video HDR and RTX Video Super Resolution can be used together to produce the clearest livestreamed video anywhere, anytime. These features work on Chromium-based browsers such as Google Chrome or Microsoft Edge.

To enable RTX Video HDR:


  1. Download and install the January Studio Driver.
  2. Ensure Windows HDR features are enabled by navigating to System > Display > HDR.
  3. Open the NVIDIA Control Panel and navigate to Adjust video image settings > RTX Video Enhancement — then enable HDR.


Standard dynamic range video will then automatically convert to HDR, displaying remarkably improved details and sharpness.

RTX Video HDR is among the RTX-powered apps enhancing everyday PC use, productivity, creating and gaming. NVIDIA Broadcast supercharges mics and cams; NVIDIA Canvas turns simple brushstrokes into realistic landscape images; and NVIDIA Omniverse seamlessly connects 3D apps and creative workflows. Explore exclusive Studio tools, including industry-leading NVIDIA Studio Drivers — free for RTX graphics card owners — which support the latest creative app updates, AI-powered features and more.

RTX Video HDR requires an RTX GPU connected to an HDR10-compatible monitor or TV. For additional information, check out the RTX Video FAQ.



Introducing the Remarkable RTX Remix Open Beta


Built on NVIDIA Omniverse, the RTX Remix open beta is available now.





The NVIDIA RTX Remix open beta is out now.

It allows modders to easily capture game assets, automatically enhance materials with generative AI tools, reimagine assets via Omniverse-connected apps and Universal Scene Description (OpenUSD), and quickly create stunning RTX remasters of classic games with full ray tracing and NVIDIA DLSS technology.



RTX Remix has already delivered stunning remasters, such as Portal with RTX and the modder-made Portal: Prelude RTX. Orbifold Studios is now using the technology to develop Half-Life 2 RTX: An RTX Remix Project, a community remaster of one of the highest-rated games of all time. Check out the gameplay trailer, showcasing Orbifold Studios’ latest updates to Ravenholm:



Learn more about the RTX Remix open beta and sign up to gain access.



Leveling Up With RTX


Vishal Ranga has a decade’s worth of experience in the gaming industry, where he pursues level design.

“I’ve loved playing video games since forever, and that curiosity led me to game design,” he said. “A few years later, I found my sweet spot in technical art.”





Ranga specializes in level design.

His stunning scene Disowned was born out of experimentation with Unreal Engine’s new ray-traced global illumination lighting capabilities.

Remarkably, he skipped the concepting process — the entire project was conceived solely from Ranga’s imagination.

Applying the water shader and mocking up the lighting early helped Ranga set up the mood of the scene. He then updated old assets and searched the Unreal Engine store for new ones — what he couldn’t find, like fishing nets and custom flags, he created from scratch.




Ranga meticulously organizes assets.

“I chose a GeForce RTX GPU to use ray-traced dynamic global illumination with RTX cards for natural, more realistic light bounces.” — Vishal Ranga

Ranga’s GeForce RTX graphics card unlocked RTX-accelerated rendering for high-fidelity, interactive visualization of 3D designs during virtual production.

Next, he tackled shader work, blending moss and muck into models of wood, nets and flags. He also created a volumetric local fog shader to complement the assets as they pass through the fog, adding greater depth to the scene.





Shaders add extraordinary depth and visual detail.

Ranga then polished everything up. He first used a water shader to add realism to reflections, surface moss and subtle waves, then tinkered with global illumination and reflection effects, along with other post-process settings.




Materials come together to deliver realism and higher visual quality.

Ranga used Unreal Engine’s internal high-resolution screenshot feature and sequencer to capture renders. This was achieved by cranking up screen resolution to 200%, resulting in crisper details.

Throughout, DLSS enhanced Ranga’s creative workflow, allowing for smooth scene movement while maintaining immaculate visual quality.

When finished with adjustments, Ranga exported the final scene in no time thanks to his RTX GPU.


Ranga encourages budding artists who are excited by the latest creative advances but wondering where to begin to “practice your skills, prioritize the basics.”

“Take the time to practice and really experience the highs and lows of the creation process,” he said. “And don’t forget to maintain good well-being to maximize your potential.”



3D artist Vishal Ranga.

Check out Ranga’s portfolio on ArtStation.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.



 


Google’s Gemini Pro vs OpenAI’s ChatGPT...​

Compare and Share Side-by-Side Prompts with Google’s Gemini Pro vs OpenAI’s ChatGPT.​

 


About​

Google Gemini Pro UI (based on ChatGPT-Next-Web). Get your own cross-platform Gemini app with one click.

chat.googlegemini.co/

Gemini Pro Chat​

One-click deploy of a well-designed cross-platform Gemini web UI, with Gemini Pro support, based on ChatGPT Next Web.
Web App / Twitter / Feedback
Deploy with Vercel

Features​

  • Deploy for free with one-click on Vercel in under 1 minute
  • Google Gemini Pro support, including text input and combined text-and-image input
  • Privacy first, all data is stored locally in the browser
  • Markdown support: LaTex, mermaid, code highlight, etc.
  • Responsive design, dark mode and PWA
  • Fast first-screen load (~100 KB), with streaming response support
  • New in v2: create, share and debug your chat tools with prompt templates (mask)
  • Awesome prompts powered by awesome-chatgpt-prompts-zh and awesome-chatgpt-prompts
  • Automatically compresses chat history to support long conversations while also saving your tokens (see the sketch after this list)
  • I18n: English, 简体中文, 繁体中文, 日本語, Français, Español, Italiano, Türkçe, Deutsch, Tiếng Việt, Русский, Čeština, 한국어, Indonesia
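
A minimal sketch of how the history compression mentioned above could work. This is assumed behavior, not this project’s actual code, and summarize is a placeholder for a call to the chat model itself:

def compress_history(messages, summarize, token_budget=3000):
    # Fold the oldest turns into a model-written summary once the
    # conversation exceeds a rough token budget, so the prompt stays small.
    def estimate_tokens(msgs):
        # Crude estimate: ~4 characters per token for English text.
        return sum(len(m["content"]) for m in msgs) // 4

    while estimate_tokens(messages) > token_budget and len(messages) > 4:
        head, messages = messages[:4], messages[4:]
        summary = summarize(head)  # recap of the removed turns
        messages.insert(0, {"role": "system",
                            "content": f"Summary of earlier turns: {summary}"})
    return messages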

 