
General

This Unique Font Uses Psychology to Help You Remember


Everyone has different techniques for learning new things: create a mind map, employ mnemonics, join a study group. Now there's another option: a typeface specifically designed to help people retain more information and better remember their notes.

A world first, Sans Forgetica combines psychological theory and design principles to improve recollection of written text.

The font was developed by researchers and academics from Australia’s RMIT University using the learning principle of “desirable difficulty,” where an obstruction adds to the learning process.

The idea is that, by removing segments of each character, our brains have to work just a little bit harder to process what we’re reading, which leads to better memory retention and promotes deeper cognitive processing.


Typical fonts—like Helvetica, Arial, Times New Roman, Goudy, Futura, and Baskerville—are “familiar,” according to Janneke Blijlevens, senior marketing lecturer and founding member of the RMIT Behavioural Business Lab.

“Readers often glance over them and no memory trace is created,” she said.

But make the typeface too different, and the brain basically rejects it.

“Sans Forgetica lies at a sweet spot where just enough obstruction has been added to create that memory retention,” Blijlevens added.

Two key subversive elements (backslanting and gapping) force folks to fill in those voids and slow down the reading process, giving the brain more time to engage.


During an online experiment featuring more than 100 university students and three new fonts, Sans Forgetica “broke just enough design principles without becoming too illegible and aided memory retention,” according to RMIT.

“We believe this is the first time that specific principles of design theory have been combined with specific principles of psychology theory in order to create a font,” behavioral economist Jo Peryman, chair of the RMIT Behavioural Business Lab, said.

Aimed at students bent over their computers and cramming for exams, Sans Forgetica is available to download for free as a font and Chrome browser extension.

“Sans Forgetica has the potential to be far-reaching, beyond the classroom, to a vast range of different people wanting to remember those things that are important to them in their lives,” Blijlevens said.

Mike Parker, progenitor of the Helvetica font, died in 2014. Around the same time, a 14-year-old discovered that the government could save $400 million each year by changing typefaces. Read more about typography and psychology on Geek.com.


New Florida Law Nixes Need for Autonomous Vehicle Operators


Florida Gov. Ron DeSantis last week signed a bill removing “unnecessary obstacles that hinder the development of autonomous vehicle technology”—including backup drivers.

The new law, which takes effect July 1, will allow a self-driving car (meeting all insurance requirements) to run without a human operator.

It also exempts occupants from laws against texting and other distractions.

“Signing this legislation paves the way for Florida to continue as a national leader in transportation innovation and technological advancement,” DeSantis said in a statement.

Flanked by smiling supporters, the governor on Thursday signed House Bill 311 at the SunTrax transportation center, used to test autonomous vehicles.

Also in attendance were Republican bill sponsors Sen. Jeff Brandes and Rep. Jason Fischer.

Florida Gov. Ron DeSantis signed a bill allowing autonomous vehicle tests with no human operator (via Gov. DeSantis Press Office)

“We here in Florida are pioneering the most exciting innovations in transportation,” Fischer said. “This bill on self-driving cars will usher in a new era of smart cities that will not only expand our economy but increase road safety and decrease traffic congestion.”

The law also permits “active display” of TV or video in the car.

The Sunshine State isn’t exactly breaking new ground: Last year, the California DMV introduced new regulations allowing automakers to test and deploy fully driverless vehicles.

We still have a long way to go, though, before folks can start napping behind the wheel.

High-profile accidents—an autonomous Uber struck and killed a pedestrian in Arizona; Tesla’s Autopilot feature was engaged at the time of a fiery Model X crash in California—have left some mistrustful of self-driving cars.

Others, meanwhile, are ready and willing to move forward with the unpredictable technology.

“Autonomous vehicles are the way of the future and Florida is leading the charge through the research, testing, and development of autonomous vehicles,” according to state Department of Transportation Secretary Kevin Thibault. “And now with this bill signed into law … Florida is ready to lead the nation with this innovative transportation advancement.”

Brandes agreed, adding that “With the signing of this legislation we reaffirm our bold commitment to lead the country as we transition to a shared, electric and driverless future.”

In February 2018, Ford unveiled a self-driving delivery pilot program that would see autonomous vehicles roaming the streets of Miami—notorious for its traffic congestion.


MIT Robot Learns to ID Objects by Sight, Touch


Humans’ five senses work together to reveal what we see, hear, smell, taste, and touch.

But robots are still learning to understand different tactile signals.

To move the process along, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a predictive AI that can learn to see by touching, and learn to feel by seeing.

The system creates realistic tactile signals from visual inputs, and uses them to predict which object it is making physical contact with.

Using a KUKA robot arm and a GelSight tactile sensor (designed by another group at MIT), researchers recorded nearly 200 objects—tools, household products, fabrics, etc.—being touched more than 12,000 times.

By breaking down those video clips into static frames, the team compiled a dataset of more than 3 million visual/tactile-paired images, known as “VisGel.”
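The pairing idea can be pictured with a toy sketch. This is illustrative only: the names `TouchRecord` and `pair_frames` are invented here, and real VisGel entries are full camera and GelSight images rather than byte strings.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TouchRecord:
    object_name: str      # e.g. "mug", "fabric"
    visual_frame: bytes   # camera image of the arm touching the object
    tactile_frame: bytes  # GelSight sensor image at the same instant

def pair_frames(visual: List[bytes], tactile: List[bytes],
                object_name: str) -> List[TouchRecord]:
    """Align per-timestep visual and tactile frames into training pairs."""
    return [TouchRecord(object_name, v, t) for v, t in zip(visual, tactile)]

# Toy usage: three synchronized frames from one touch interaction.
pairs = pair_frames([b"v0", b"v1", b"v2"], [b"t0", b"t1", b"t2"], "mug")
print(len(pairs))  # 3
```

Each record ties one moment of sight to one moment of touch, which is what lets a model learn the mapping in either direction.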

“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” lead study author Yunzhu Li, a CSAIL PhD student, said in a statement.

“By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings,” he continued. “Bringing these two senses together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects.”

During testing, if the model was fed tactile data on a shoe, for instance, it could produce an image of where the shoe was most likely to be touched. The same goes for a computer mouse, box, cup, T-shirt, hammer—whatever its automated heart desires.

This type of ability, CSAIL said, could be useful for tasks in which there is no visual data: like when a light is off, or someone is reaching blindly into an unknown area.

Moving forward, the team plans to increase the size and diversity of its dataset by collecting input in more unstructured areas (i.e., outside a controlled environment), or by using a new MIT-designed tactile smart-glove.

Even with the help of a sensor-packed mitt, certain details can be tricky to infer from switching between modes—details that even humans can’t ascertain without using more than one sense, like identifying the color of an object by touching it or determining how soft a sofa is without actually pressing on it.

CSAIL invented a similar system earlier this year: The “RoCycle” uses a soft Teflon hand covered in tactile sensors to detect an object’s size and stiffness—no visual cues necessary.

Basically, it squeezes cups, boxes, and cans to determine their makeup, and, ultimately, their recyclability.

A collaboration with Yale University, RoCycle demonstrates the limits of sight-based sorting; it can distinguish between two identical-looking Starbucks cups made of paper and plastic that would give vision systems (and the human eye) trouble.


Deepfake Tool Makes It Easy to Put Words Into Someone’s Mouth


Changing what someone says in a video is now as easy as “copy and paste.”

Researchers developed new software that uses machine learning to let users edit the text transcript of a video, altering the very words coming out of a person’s mouth.

The team—from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research—envisions the technology being used by film and television editors.

“Much like word processing, the editor could easily add new words, delete unwanted ones, or completely rearrange the pieces by dragging and dropping them as needed to assemble a finished video that looks almost flawless to the untrained eye,” according to a Stanford press release.

A new algorithm allows video editors to modify talking-head videos as if they were editing text—copying, pasting, adding and deleting words (via Stanford University)

The algorithm works best with talking-head videos, which show speakers only from the shoulders up; hand gestures and other body movements are a dead giveaway.

“The work could be a boon for video editors and producers but does raise concerns as people increasingly question the validity of images and videos online,” the authors said.

Say, for example, an actor flubs their line: The editor can simply rewrite the transcript, and the application will assemble the right word from various phrases spoken elsewhere in the recording.
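The lookup step can be roughed out as follows. This is a hypothetical sketch: the real system assembles audio at the phoneme level rather than matching whole words, and `find_source_segment` is an invented helper, not the researchers' code.

```python
from typing import List, Optional, Tuple

# Each transcript entry: (word, start_sec, end_sec) in the original recording.
Transcript = List[Tuple[str, float, float]]

def find_source_segment(transcript: Transcript,
                        word: str) -> Optional[Tuple[float, float]]:
    """Locate a spoken instance of `word` elsewhere in the recording
    whose audio/video can be reused for the edited line."""
    for w, start, end in transcript:
        if w.lower() == word.lower():
            return (start, end)
    return None  # word was never spoken; nothing to reuse

recording = [("the", 0.0, 0.2), ("quick", 0.2, 0.6), ("fox", 0.6, 1.0)]
print(find_source_segment(recording, "quick"))  # (0.2, 0.6)
```

Once a source segment is found, the system splices its audio into the edited timeline and renders matching mouth movements on top.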

It’s the skin grafting of video production: much as surgeons transplant skin from one area of the body to another, the software transplants spoken material from one part of the recording to another.


The machine-learning element then converts those sounds into a final video that appears natural to the viewer, with intelligent smoothing and neural rendering working to create a photorealistic result in perfect lip-sync.

“Visually, it’s seamless. There’s no need to re-record anything,” lead researcher Ohad Fried, a postdoctoral scholar at Stanford, said in a statement.

In a crowd-sourced study with 138 participants, the team’s edits were rated as “real” almost 60 percent of the time. There is, of course, still room for improvement.

The algorithm currently requires at least 40 minutes of original video as input, and won’t yet work with just any sequence.

In an era of fake news, Internet hoaxes, and revenge porn, letting this technology fall into the wrong hands could be disastrous.

“This technology is really about better storytelling,” Fried said, acknowledging concerns about the software being used for illicit purposes.

Editing video is as easy as editing text (via Stanford University)

“Unfortunately, technologies like this will always attract bad actors,” he added. “But the struggle is worth it given the many creative video-editing and content-creation applications this enables.”

In an effort to curb rabble-rousers, researchers have proposed guidelines for using these tools that would alert viewers and performers that a video has been manipulated.

An opt-in watermarking system, perhaps, to identify edited content. Or digital/non-digital fingerprinting techniques.

None of these solutions are comprehensive, though; viewers must remain skeptical and cautious, Fried said.

The most pressing matter, he suggested, is to raise public awareness and education on video manipulation, so people are better equipped to question and assess synthetic content.

The full report—available online—will be published in the journal ACM Transactions on Graphics.
