Herself's Artificial Intelligence

Humans, meet your replacements.

Archive for the ‘computer vision’ Category

DeepBeliefSDK

without comments

This is very cool: real-time image recognition from video, on iOS.

I am totally convinced that deep learning approaches to hard AI are going to change our world, especially when they’re running on cheap networked devices scattered everywhere. I’m a believer because I’ve seen how good the results can be on image recognition, but I understand why so many experienced engineers are skeptical. It sounds too good to be true, and we’ve all been let down by AI promises in the past.

That’s why I’ve decided to release DeepBeliefSDK, an iOS version of the deep learning approach that has taken the computer vision world by storm. In technical terms it’s a framework that implements the full Krizhevsky stack of 60 million neural network connections, with a customizable top layer inspired by the Decaf approach. It does all this in under 300ms on an iPhone 5S, and in less than 20MB of memory. Here’s a video of me using the sample app to detect our cat!
More at Pete Warden’s Blog
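
Roughly, the “customizable top layer” idea works like this: the pre-trained Krizhevsky-style network is kept frozen and used only to turn an image into a feature vector, and a small classifier trained on those vectors is what learns the new categories (your cat, for instance). Below is a minimal Python sketch of that idea, not the actual DeepBeliefSDK API; extract_features() is a hypothetical placeholder standing in for the frozen network.

import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(image):
    """Placeholder for the frozen convnet: return the penultimate-layer
    activations (e.g. a 4096-dimensional vector) for one image."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2 ** 32))
    return rng.standard_normal(4096)

def train_top_layer(images, labels):
    """Train only a small classifier on the frozen features; this is the
    'customizable top layer' that learns new categories such as 'my cat'."""
    X = np.stack([extract_features(img) for img in images])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, labels)
    return clf

def predict(clf, image):
    return clf.predict([extract_features(image)])[0]

# Toy usage with random placeholder images.
images = [np.random.rand(224, 224, 3) for _ in range(8)]
labels = ["cat", "not_cat"] * 4
clf = train_top_layer(images, labels)
print(predict(clf, images[0]))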

GitHub, DeepBeliefSDK

Written by Linda MacPhee-Cobb

April 11th, 2014 at 11:50 am

Insight into fly vision may lead to better computer vision

without comments

New insight into how brains process visual information is a double-edged sword. It will make for much better vision engines, but with that will come the failure of our most popular human-verification test of the moment, the CAPTCHA.

Using a fly, whose brain is heavily devoted to processing visual information, Nemenman and his colleagues were able to show that information is carried in the precise timing of spikes in the fly’s neurons.

. . .

Nemenman and his colleagues’ research is significant because it re-examines fundamental assumptions that became the basis of neuromimetic approaches to artificial intelligence, such as artificial neural networks. Those approaches developed networks that respond to the number of impulses within a given time period rather than to the precise timing of those impulses.

“This may be one of the main reasons why artificial neural networks do not perform anywhere comparable to a mammalian visual brain,” said Nemenman, who is a member of Los Alamos’ Computer, Computational and Statistical Sciences Division. “In fact, the National Science Foundation has recognized the importance of this distinction and has recently funded a project, led by Garrett Kenyon of the Laboratory’s Physics Division, to enable creation of large, next-generation neural networks.”

Applying this new understanding of neural function to the design of computers could assist in analyses of satellite images and facial-pattern recognition in high-security environments, and could help solve other national and global security problems. [ read more Language of a fly proves surprising ]
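
The distinction is easy to see in a toy example: a rate code only counts spikes inside a window, so two spike trains with the same count are indistinguishable, while a code that bins spikes at millisecond or finer resolution can tell them apart. The Python sketch below uses made-up spike times purely for illustration.

import numpy as np

def rate_code(spike_times, window=0.1):
    """Rate code: all that matters is how many spikes fall in the window."""
    return int(np.sum(np.asarray(spike_times) < window))

def timing_code(spike_times, window=0.1, resolution=0.001):
    """Timing code: spikes are binned at millisecond (or finer) resolution,
    so the pattern of spikes carries information, not just the count."""
    bins = np.arange(0.0, window + resolution, resolution)
    counts, _ = np.histogram(spike_times, bins)
    return counts

a = [0.011, 0.012, 0.013]   # three spikes bunched together
b = [0.010, 0.050, 0.090]   # three spikes spread out

print(rate_code(a), rate_code(b))                      # 3 3  -> identical under a rate code
print(np.array_equal(timing_code(a), timing_code(b)))  # False -> distinguishable by timing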

Papers:
PLoS: Neural Coding of Natural Stimuli: Information at Sub-Millisecond Resolution

More information:
Is Captcha’s moment passing?

Written by Linda MacPhee-Cobb

April 28th, 2008 at 5:00 am

Facial expression AI will help your computer to understand you

without comments

Ah, but do we really want our computers to understand us? Anybody remember ‘Clippy’?

Computer: “You seem depressed today, should I Google Dr Kevorkian for you?”

Or will the clerks at the local retail store start wearing cameras with emotion-recognizing software? A bit of customer understanding by the help would go a long way in many a business.

Researchers at the Department of Artificial Intelligence (DIA) of the Universidad Politécnica de Madrid’s School of Computing (FIUPM) have, in conjunction with Madrid’s Universidad Rey Juan Carlos, developed an algorithm that is capable of processing 30 images per second to recognize a person’s facial expressions in real time and categorize them as one of six prototype expressions: anger, disgust, fear, happiness, sadness and surprise.

Applying the facial expression recognition algorithm, the developed prototype is capable of processing a sequence of frontal images of moving faces and recognizing the person’s facial expression. The software can be applied to video sequences in realistic situations and can identify the facial expression of a person seated in front of a computer screen. Although still only a prototype, the software is capable of working on a desktop computer or even on a laptop. [ read more Facial Expression Recognition Software ]
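
Judging from the paper title listed below (correlation and Mahalanobis distance), the flavor of the approach is presumably nearest-prototype classification: a facial feature vector is assigned to whichever of the six expression prototypes it is closest to under a Mahalanobis metric. Here is a generic Python sketch of that idea; the feature dimension, prototypes, and covariance are illustrative placeholders, not the Madrid group’s actual method.

import numpy as np

EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def mahalanobis(x, mu, cov_inv):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

def classify_expression(features, prototypes, cov_inv):
    """Assign the feature vector to the nearest of the six prototype
    expressions under the Mahalanobis metric."""
    distances = {name: mahalanobis(features, mu, cov_inv)
                 for name, mu in prototypes.items()}
    return min(distances, key=distances.get)

# Toy usage with random placeholder data.
rng = np.random.default_rng(0)
dim = 32                     # hypothetical feature dimension
prototypes = {name: rng.standard_normal(dim) for name in EXPRESSIONS}
cov_inv = np.eye(dim)        # identity inverse covariance -> plain Euclidean distance
print(classify_expression(rng.standard_normal(dim), prototypes, cov_inv))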

However emotion-recognition software gets used in the future, this software is bound to be fun.

More information:
Video of software in action

Papers:
Jose Miguel papers ( $ )
Facial Gesture Recognition Using Correlation and Mahalanobis Distance

See also:
Software recognizes anxiety in people

Written by Linda MacPhee-Cobb

April 17th, 2008 at 5:00 am

More cool robotic help for old fogies

without comments

The Japanese are really going to make it much more fun to age. What is really cool is that the technology for these glasses is well known and already available.

. . .

Simply tell the glasses what you are looking for, and they will play into your eye a video of the last few seconds in which you saw that item.

Built on to the glasses is a tiny camera which makes a constant record of everything the wearer sees: the tiny display inside the glasses identifies what is being scanned and a small readout instantly announces what the computer thinks the object probably is. For some things that look different from a range of angles, however, the glasses offer only a “best guess” – they are better at identifying a guitar and a chair than a coathanger or battery.

The hardware itself is not extraordinary: what has taken Professor Kuniyoshi several years to perfect is the computer algorithm that allows the goggles to know immediately what they are seeing. It is, he says, a problem that has always vexed the fields of robotics and artificial intelligence.

But working in a team with Tatsuya Harada, one of Japan’s masters of the science of “fuzzy logic”, Mr Kuniyoshi believes he has cracked the problem. Behind the goggles is possibly the world’s most advanced object recognition software and a computer that can learn the identity of new objects within seconds.

So if the user wanders round the house for about an hour telling the goggles the name of everything from that coathanger to the kitchen sink, they will remember. Then if, at some point in the future, you ask them where you last saw a particular item, they will play the appropriate footage.

. . .

[ read more The glasses that can help you find anything ]
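
Setting aside the hard part, which is the recognition itself, the “where did I last see it” bookkeeping is straightforward: label and timestamp every frame, keep a short rolling buffer of recent video, and replay the buffer from the most recent sighting when asked. Here is a minimal Python sketch of that bookkeeping, with the recognizer stubbed out; the class and field names are my assumptions, not taken from Professor Kuniyoshi’s system.

import time
from collections import deque

class ObjectMemory:
    def __init__(self, seconds_of_context=5, fps=30):
        self.last_seen = {}                        # label -> (timestamp, clip)
        self.recent = deque(maxlen=seconds_of_context * fps)

    def observe(self, frame, labels):
        """Record one camera frame plus whatever the recognizer found in it."""
        self.recent.append(frame)
        clip = list(self.recent)                   # the "last few seconds" of video
        for label in labels:
            self.last_seen[label] = (time.time(), clip)

    def where_is(self, label):
        """Return (timestamp, clip) for the most recent sighting, or None."""
        return self.last_seen.get(label)

memory = ObjectMemory()
memory.observe(frame="frame_0001", labels=["coathanger", "chair"])
print(memory.where_is("coathanger"))   # (timestamp, ["frame_0001"])
print(memory.where_is("guitar"))       # None -> never seen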

See also:
Robots for old boomers (UMass project aims to assist aging population )

Written by Linda MacPhee-Cobb

April 3rd, 2008 at 5:00 am

Cell phones with face recognition

without comments

I told you AI would be coming to your cell phone soon. Not only do cell phones come with powerful processors now, but there are also special circumstances that make cell phone AI both more practical and more interesting.

Cell phone cameras now automatically tag the date, and often the GPS coordinates, of the pictures you take. The cell cameras also usually recognize when a photo contains a face; this is used to help with exposure and other automatic settings built into the camera. Because people photograph the same 30 or so people with their cell phones, the face recognition software doesn’t have to learn many faces.

. . . With autotagging, the camera attaches tags as the pictures are taken. Today, cameras embed timestamps in photos, which makes it possible to sift through pictures by date. But be honest here–how reliably can you remember exactly when you took that picture of your darling daughter a year or two ago that you’d like to e-mail to her grandparents? Being able to screen for photos only of a particular person could dramatically speed up the search process. Face recognition requires computational horsepower that is hard to fit into the confines of a digital camera, but one company likely to help make it a reality is Fotonation, which already supplies face-detection software for dozens of camera models from Samsung, Pentax, and others. [ read more Up Next: Cameras that know who you photographed ]
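
That small-gallery observation is what makes the problem tractable on a phone: tagging a face reduces to a nearest-neighbor search over a few dozen stored descriptors. Here is a rough Python sketch of that lookup; embed_face() is a hypothetical placeholder for whatever descriptor the camera actually computes, and the distance threshold is an arbitrary illustrative value.

import numpy as np

def embed_face(face_image):
    """Placeholder: return a fixed-length descriptor for a cropped face."""
    rng = np.random.default_rng(abs(hash(face_image)) % (2 ** 32))
    return rng.standard_normal(128)

def tag_face(face_image, gallery, threshold=1.0):
    """Return the name of the closest known person, or None if nobody
    in the gallery is close enough."""
    query = embed_face(face_image)
    best_name, best_dist = None, float("inf")
    for name, descriptor in gallery.items():
        dist = float(np.linalg.norm(query - descriptor))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# Toy gallery of the handful of people a phone owner actually photographs.
gallery = {"grandma": embed_face("grandma_ref.jpg"),
           "daughter": embed_face("daughter_ref.jpg")}
print(tag_face("daughter_ref.jpg", gallery))   # exact match -> "daughter"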

More information:

FaceTracker Demonstrated for Mobile Phones

Papers:
Automated sorting of consumer image collections using peripheral region image classifiers ( $ ieee pdf )
A review of face recognition techniques for in-camera applications ( $ ieee pdf )
Automated indexing of consumer image collections using person recognition techniques ( $ ieee pdf )

See also:
Mobile phone smart network warns of intruders

Written by Linda MacPhee-Cobb

December 28th, 2007 at 5:00 am