New HTC Vive Headsets Argue Virtual Reality Is Still A Thing

I have no clue what actual real people think about virtual reality. Aside from the odd VR attraction in a movie theater lobby, VR to me has been exclusively something I've seen and tried out at events I attend as a games/tech journalist. And even within that savvy sphere, folks seemed mixed on whether strapping a headset to your face to enter new worlds through the power of technology is a concept that's ready for primetime.

However, out of all the VR headsets, the HTC Vive has always seemed the most promising. It provides the clearest, most comfortable images, powered by beefy PCs. It's a hardware initiative from Valve that didn't immediately fizzle out. And it wasn't developed by a fascist like former Oculus Rift beach boy Palmer Luckey. Now, new details at CES 2019 about a pair of upcoming Vive headsets make me even more confident that this is the VR hardware to get behind.

The Vive always seemed to position itself as the VR headset for peak performance, and that’s an important category. But if VR is going to become a true consumer technology it needs to be viable for all kinds of people at all kinds of price points for all kinds of uses. The Vive Cosmos looks like that more mainstream kind of headset.

This teaser video is brief, and HTC hasn't put out many more details, but the Vive Cosmos seems to be a much more streamlined VR headset. You don't need to set up cumbersome tracking equipment in your room; you just use the sleek hand controllers. You can flip up the visor, perhaps for augmented reality features overlaid on the real world, or maybe just to walk around safely. For now, you still need to connect to a computer, since the device still demands a lot of graphical gaming power. But the promise of a future smartphone connection teases a truly mobile headset, one that also takes advantage of the new Vive Reality System hub interface.

If you want the most advanced VR headset possible though, look toward the new Vive Pro Eye. The Vive Pro was already a technological leap over the original Vive. And the Vive Pro Eye adds a crucial new bit of functionality to upgrade the headset, and the concept of VR in general, even further. The Vive Pro Eye features eye-tracking, so the headset can tell what you’re looking at in virtual space.

So why is eye-tracking so useful? From a user standpoint, games and apps could use your gaze to make menus more intuitive: just look at the button you want to press. Maybe a survival horror game could hide a monster just outside your peripheral vision? From a development standpoint, though, the big advantage is that you can spend more hardware resources rendering whatever the user is looking at, making that priority look as good as possible while blurring out the less necessary background. It's a smarter use of technology, like when video games render faraway areas in less detail until you approach them, and it could make the Vive Pro Eye seem even more powerful than it actually is.
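This technique is generally known as foveated rendering. Here's a minimal sketch of the idea in Python; the angles and falloff curve are invented for illustration, since real headsets tune these per lens and per eye:

```python
def foveated_quality(angle_from_gaze_deg, fovea_deg=10.0, periphery_deg=30.0, floor=0.25):
    """Map a screen region's angular distance from the tracked gaze point
    to a rendering-resolution scale (1.0 = full detail).
    Illustrative numbers only, not any headset's actual tuning."""
    if angle_from_gaze_deg <= fovea_deg:
        return 1.0    # full detail where the eye is actually looking
    if angle_from_gaze_deg >= periphery_deg:
        return floor  # cheap, blurry periphery the user won't notice
    # Linear falloff between the fovea and the periphery.
    t = (angle_from_gaze_deg - fovea_deg) / (periphery_deg - fovea_deg)
    return 1.0 - t * (1.0 - floor)

# A region 20 degrees off-gaze renders at 62.5% resolution:
print(foveated_quality(20.0))  # 0.625
```

The payoff is that the GPU spends its budget where the eye can actually resolve detail, which is why eye-tracked headsets can feel sharper than their raw specs suggest.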

Development kits for the Vive Cosmos ship in early 2019, while the Vive Pro Eye ships in Q2. We wonder how well they'll play with Valve's own independent Half-Life VR plans.

New Florida Law Nixes Need for Autonomous Vehicle Operators

Florida Gov. Ron DeSantis last week signed a bill removing “unnecessary obstacles that hinder the development of autonomous vehicle technology”—including backup drivers.

The new law, which takes effect July 1, will allow a self-driving car (meeting all insurance requirements) to run without a human operator.

It also exempts occupants from laws against texting and other distractions, and permits "active display" of TV or video in the car.

“Signing this legislation paves the way for Florida to continue as a national leader in transportation innovation and technological advancement,” DeSantis said in a statement.

Flanked by smiling supporters, the governor on Thursday signed House Bill 311 at the SunTrax transportation center, used to test autonomous vehicles.

Also in attendance were Republican bill sponsors Sen. Jeff Brandes and Rep. Jason Fischer.

Florida Gov. Ron DeSantis signed a bill allowing autonomous vehicle tests with no human operator (via Gov. DeSantis Press Office)

“We here in Florida are pioneering the most exciting innovations in transportation,” Fischer said. “This bill on self-driving cars will usher in a new era of smart cities that will not only expand our economy but increase road safety and decrease traffic congestion.”

The Sunshine State isn’t exactly breaking new ground: Last year, the California DMV introduced new regulations allowing automakers to test and deploy fully driverless vehicles.

We still have a long way to go, though, before folks can start napping behind the wheel.

High-profile accidents—an autonomous Uber struck and killed a pedestrian in Arizona; Tesla’s Autopilot feature was engaged at the time of a fiery Model X crash in California—have left some mistrustful of self-driving cars.

Others, meanwhile, are ready and willing to move forward with the unpredictable technology.

“Autonomous vehicles are the way of the future and Florida is leading the charge through the research, testing, and development of autonomous vehicles,” according to state Department of Transportation Secretary Kevin Thibault. “And now with this bill signed into law … Florida is ready to lead the nation with this innovative transportation advancement.”

Brandes agreed, adding that “With the signing of this legislation we reaffirm our bold commitment to lead the country as we transition to a shared, electric and driverless future.”

In February 2018, Ford unveiled a self-driving delivery pilot program that would see autonomous vehicles roaming the streets of Miami—notorious for its traffic congestion.

MIT Robot Learns to ID Objects by Sight, Touch

Humans’ five senses work together to reveal what we see, hear, smell, taste, and touch.

But robots are still learning to understand different tactile signals.

To move the process along, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a predictive AI that can learn to see by touching, and learn to feel by seeing.

The system can create realistic tactile signals from visual inputs, and predict which object, and which part of it, is being touched directly from tactile input.

Using a KUKA robot arm and a GelSight tactile sensor (designed by another group at MIT), researchers recorded nearly 200 objects—tools, household products, fabrics, etc.—being touched more than 12,000 times.

By breaking down those video clips into static frames, the team compiled a dataset of more than 3 million visual/tactile-paired images, known as “VisGel.”
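As a rough sketch of how such a paired dataset might be assembled (the file names and synchronization here are hypothetical; the real VisGel pipeline is described in the team's paper), imagine sampling synchronized frames from the scene camera and the GelSight sensor:

```python
import cv2  # OpenCV, for reading video frames

def build_visual_tactile_pairs(scene_video, gelsight_video):
    """Read two time-synchronized recordings frame by frame and emit
    (visual frame, tactile frame) pairs, VisGel-style. Assumes the two
    clips are already aligned; real pipelines need careful syncing."""
    scene = cv2.VideoCapture(scene_video)
    touch = cv2.VideoCapture(gelsight_video)
    pairs = []
    while True:
        ok_s, scene_frame = scene.read()
        ok_t, touch_frame = touch.read()
        if not (ok_s and ok_t):  # stop at the end of the shorter clip
            break
        pairs.append((scene_frame, touch_frame))
    return pairs
```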

“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” lead study author Yunzhu Li, a CSAIL PhD student, said in a statement.

“By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings,” he continued. “Bringing these two senses together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects.”

During testing, if the model was fed tactile data on a shoe, for instance, it could produce an image of where the shoe was most likely to be touched. The same goes for a computer mouse, box, cup, T-shirt, hammer—whatever its automated heart desires.

This type of ability, CSAIL said, could be useful for tasks in which there is no visual data: when the lights are off, say, or when someone is blindly reaching into an unknown space.

Moving forward, the team plans to increase the size and diversity of its dataset by collecting data in more unstructured areas (i.e., outside a controlled environment), or by using a new MIT-designed tactile smart glove.

Even with the help of a sensor-packed mitt, certain details remain tricky to infer by switching between modes: details that even humans can't ascertain without using more than one sense, like identifying the color of an object by touching it, or judging how soft a sofa is without actually pressing on it.

CSAIL invented a similar system earlier this year: The “RoCycle” uses a soft Teflon hand covered in tactile sensors to detect an object’s size and stiffness—no visual cues necessary.

Basically, it squeezes cups, boxes, and cans to determine their makeup, and, ultimately, their recyclability.

A collaboration with Yale University, RoCycle demonstrates the limits of sight-based sorting; it can distinguish between two identical-looking Starbucks cups, one paper and one plastic, that would give vision systems (and the human eye) trouble.
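As a toy illustration of that sorting idea, stiffness measured during a squeeze maps naturally to material classes. The thresholds below are invented, not the lab's calibration:

```python
def classify_material(stiffness):
    """Guess an object's makeup from how much it resists a squeeze,
    in the spirit of RoCycle. Stiffness is normalized to [0, 1];
    the thresholds are hypothetical."""
    if stiffness > 0.8:
        return "metal"    # cans barely deform under the gripper
    if stiffness > 0.4:
        return "plastic"  # flexes, then pushes back
    return "paper"        # crumples easily

print(classify_material(0.3))  # paper -- e.g. the paper Starbucks cup
```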

Deepfake Tool Makes It Easy to Put Words Into Someone’s Mouth

Changing what someone says in a video is now as easy as “copy and paste.”

Researchers developed new software that uses machine learning to let users edit the text transcript of a video, altering the very words coming out of a person’s mouth.

The team—from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research—envisions the technology being used by film and television editors.

“Much like word processing, the editor could easily add new words, delete unwanted ones, or completely rearrange the pieces by dragging and dropping them as needed to assemble a finished video that looks almost flawless to the untrained eye,” according to a Stanford press release.

A new algorithm allows video editors to modify talking-head videos as if they were editing text—copying, pasting, adding and deleting words (via Stanford University)

The algorithm works best with talking-head videos, which show speakers only from the shoulders up; hand gestures and other body movements are a dead giveaway.

“The work could be a boon for video editors and producers but does raise concerns as people increasingly question the validity of images and videos online,” the authors said.

Say, for example, an actor flubs their line: The editor can simply rewrite the transcript, and the application will assemble the right word from various phrases spoken elsewhere in the recording.

It's a bit like when surgeons transplant skin from one area of the body to another: the skin grafting of video production.

The machine-learning element then converts those stitched-together sounds into a final video that appears natural to the viewer, with intelligent smoothing and neural rendering working to produce a photorealistic result in perfect lip-sync.
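A toy sketch of just the matching step might look like the following; the data structures are hypothetical, and the real system layers 3D face-model blending and neural rendering on top:

```python
def find_snippets(target_phonemes, recording):
    """Greedily cover an inserted word's phonemes with the longest
    contiguous phoneme runs spoken elsewhere in the recording, so their
    mouth motions can be stitched together. Toy version of the idea;
    `recording` is a list of (phoneme, start_time, end_time) tuples."""
    snippets, i = [], 0
    while i < len(target_phonemes):
        best = None  # (position in recording, match length)
        for j in range(len(recording)):
            k = 0
            while (i + k < len(target_phonemes) and j + k < len(recording)
                   and recording[j + k][0] == target_phonemes[i + k]):
                k += 1
            if k and (best is None or k > best[1]):
                best = (j, k)
        if best is None:
            raise ValueError(f"phoneme {target_phonemes[i]!r} never spoken")
        j, k = best
        snippets.append(recording[j:j + k])  # reuse this run's video frames
        i += k
    return snippets
```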

“Visually, it’s seamless. There’s no need to re-record anything,” lead researcher Ohad Fried, a postdoctoral scholar at Stanford, said in a statement.

In a crowd-sourced study with 138 participants, the team’s edits were rated as “real” almost 60 percent of the time. There is, of course, still room for improvement.

The algorithm currently requires at least 40 minutes of original video as input, and won’t yet work with just any sequence.

In an era of fake news, Internet hoaxes, and revenge porn, letting this technology fall into the wrong hands could be disastrous.

“This technology is really about better storytelling,” Fried said, acknowledging concerns about the software being used for illicit purposes.

Editing video is as easy as editing text (via Stanford University)

“Unfortunately, technologies like this will always attract bad actors,” he added. “But the struggle is worth it given the many creative video-editing and content-creation applications this enables.”

In an effort to curb rabble-rousers, researchers have proposed guidelines for using these tools that would alert viewers and performers that a video has been manipulated.

An opt-in watermarking system, perhaps, to identify edited content, or digital and non-digital fingerprinting techniques.
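The simplest form of the fingerprinting idea is cryptographic: register a hash of the original footage, and any later edit, however visually seamless, stops matching. A minimal sketch (real proposals lean on perceptual hashes that survive re-encoding; this exact scheme is only illustrative):

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Return a SHA-256 digest of raw footage. Any edit to the bytes,
    even a single altered frame, produces a different fingerprint."""
    return hashlib.sha256(video_bytes).hexdigest()

original = fingerprint(b"...raw interview footage...")
edited = fingerprint(b"...raw interview footage, one word swapped...")
print(original == edited)  # False: the manipulation is detectable
```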

None of these solutions are comprehensive, though; viewers must remain skeptical and cautious, Fried said.

The most pressing matter, he suggested, is to raise public awareness and education on video manipulation, so people are better equipped to question and assess synthetic content.

The full report—available online—will be published in the journal ACM Transactions on Graphics.
