Monday, December 17, 2007

Artists look different

When doing an eye-tracking survey, don't include artists; they may change the results by 20%.



A Norwegian study that showed 16 pictures to both trained artists and untrained viewers used eye-tracking software to show that not only do artists see the world differently when drawing it, they also see it differently when studying it.



Read the full story at "Cognitive Daily":
http://scienceblogs.com/cognitivedaily/2007/03/artists_look_different.php

Friday, December 14, 2007

GuitarHeroNoid at LeWeb3 France


Kathy Brooks from Six Apart and Loic Le Meur invited the GuitarHeroNoid to perform at the LeWeb3 conference in France.
Our man Tal Chalozin, the GuitarHeroNoid puppeteer, went there and presented the GarageGeeks activities.


Here is a video of the show

Wednesday, December 05, 2007

GuitarHeroNoid v2 at VON 2007 Boston

Mr. Jeff Pulver invited the human-size robot that plays Guitar Hero, aka the GuitarHeroNoid, to perform at the VON 2007 (Video Over the Net) conference in Boston.

Tal Chalozin and Yuval Tal have improved the robot to version 2, adding more features such as the Controller-Controller and the Penis Guitar Holder.


Pictures from VON

Thursday, November 08, 2007

Tarazi Design Studio: Per Capita

Garanti Gallery Istanbul proudly announces an exhibition of the works of Israeli industrial designer Ezri Tarazi, entitled "TARAZI DESIGN STUDIO: PER-CAPITA", running from 23 October to 15 December 2007.



The Feng-GUI lab created the video installation at "Per-Capita" that traces the signs placed on the head of each person visiting the exhibition. Visitors wear over their heads a piece of cloth marked with a red routing cross.
A tracking camera 'identifies' the sign, tracks it, and places a routing cross on it. By screening the "tracking map" on the gallery wall, the background images constantly shift, producing a different context for each situation. In one situation the exhibition visitors become targets of a targeted killing; in another they are characters in a video game.



PER-CAPITA places in front of the visitor, the participant in the exhibition, futuristic and intriguing questions raised by the twentieth century. Will nationality keep its presence and relevance? Will the planet "raft" be able to go on carrying its inhabitants? What relevance will hierarchy have? What reservoirs of power will be required in order to connect with the different? Will Jerusalem stay united?
Will religiosity become a destructive or a productive factor? Where will the wells of salvation come from?

Per Capita review at Designophy

From Ezri Tarazi's blog:

In the opening moments of my exhibition PER CAPITA at the Garanti Gallery in Istanbul, we found ourselves in the midst of a huge demonstration against TERROR.
A few days before the opening, 17 people had been killed in eastern Turkey by the PKK, the Kurdish organization. Just in front of the gallery, thousands of people were
marching along Istiklal Road, the main street of Istanbul.



I was lecturing that day at the Istanbul Technical University about the concept of Realism and Reality Design. I did not imagine that it would become so expressive. We had the HEART 'flags' from the interactive installation in the show, so we took them and waved them on the street. People responded to it with approval. I assume they accepted it as a peculiar way to demonstrate the superfluous use of terror and bloodshed in the world.


The RED of the HEART 'flags' was exactly the same RED as that of the Turkish flag. At the heart of the gallery we showed 'crowded', the sofa made of sections of national flags. It had an interesting interlocution with the flags and loud voices outside the gallery.

Per Capita turned at once from a philosophical overview into a REALITY DESIGN event, mixing with the power of the people out on the streets of Istanbul.

Monday, October 22, 2007

Internet users quick to judge

By Judy Skatssoon for Science Online

Internet users can take just one-twentieth of a second to decide whether they like the look of a website, researchers say.

Dr Gitte Lindgaard and colleagues from Carleton University in Ottawa flashed up websites for 50 milliseconds and asked participants to rate them for visual appeal.



When they repeated the exercise after a longer viewing period, the participants' ratings were consistent.

"Visual appeal can be assessed within 50 milliseconds, suggesting that web designers have about 50 milliseconds to make a good impression,"

the Canadians report in the journal Behaviour & Information Technology.

Associate Professor of psychology Bill von Hippel, from the University of New South Wales, says it takes about 50 milliseconds to read one word, making this a "stunningly remarkable" timeframe in which to process the complex stimuli on a website.

"It's quite remarkable that people do it that fast and that it holds up in their later judgement," he said.

"This may be because we have an affective or emotional system that [works] independently of our cognitive system."

He says that in evolutionary terms, this ability helped us respond rapidly to dangerous situations.

full article at ABC News

Wednesday, October 03, 2007

SIFT is the core of PhotoSynth


PhotoSynth's underlying "magic" is SIFT (Scale-Invariant Feature Transform): http://en.wikipedia.org/wiki/Scale-invariant_feature_transform

Quoting Seitz:
"We use a feature-matching technique called SIFT, developed by David Lowe at the University of British Columbia, that handles very significant differences in lighting, shading, weather, scale, and so forth,"
http://www.spokesmanreview.com/blogs/txt/archive/?postID=1454
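
For readers who want to experiment, here is a minimal sketch of SIFT keypoint matching using OpenCV. It only illustrates the feature-matching step Seitz describes, not PhotoSynth's actual pipeline, and the image file names are placeholders.

```python
# Illustration only: match SIFT features between two photos of the same scene.
import cv2

img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file names
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                 # detector + descriptor
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Keep only matches that pass Lowe's ratio test, discarding ambiguous ones.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

print(f"{len(good)} putative correspondences between the two photos")
```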

David Lowe's Autostitch project.
http://www.cs.ubc.ca/~lowe/home.html
http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html

more SIFT implementations and source code

Tuesday, September 11, 2007

Tangible Kabuki at GeekCon 2007



This was Feng-GUI's GeekCon 2007 project. Thanks to:
Project members: Rafael Mizrahi, Dani Vardi, Tal Yaniv and Eyal "Person" Shachar.
Performers: Dror Gill, Ayelet Yagil, Zvi Devir and Jeff Pulver.
Photographers: Yaniv Golan and Alex Sirota.

1. The TangibLaptop is a collaborative electronic music instrument with a tangible multi-touch laptop interface, built by the GarageGeeks (Rafael Mizrahi, Ohad Pressman and Eyal "Person"), inspired by the reactable: http://en.wikipedia.org/wiki/ReacTable



2. The GeekCon un-convention get-together is a creative gathering whose goal is to form a critical mass of technically oriented, creative, talented people who will think up, create and deploy ideas. Or it's a short summer camp for geeks... Take your pick.




1+2=3. Taking it one step further, instead of using small hand-sized physical objects, YOU become the players by wearing masks. Together, we create a musical piece in which you can take part by wearing fiducial masks detected by the TangibLaptop.




Reflections:
More Tangible Kabuki images at Flickr, tagged with "Tangible Kabuki"

More GeekCon 2007 images at Flickr, tagged with geekcon2007

Tuesday, August 28, 2007

Banner Blindness by Dr. Jakob Nielsen


An article by Dr. Jakob Nielsen from http://www.useit.com

The most prominent result from the new eyetracking studies is not actually new. We simply confirmed for the umpteenth time that banner blindness is real. Users almost never look at anything that looks like an advertisement, whether or not it's actually an ad.


Users rarely look at display advertisements on websites. Of the four design elements that do attract a few ad fixations, one is unethical and reduces the value of advertising networks.


read the full article...

Sunday, August 26, 2007

website is under attack



The website has been under attack since 26/8 21:30 and the service is unavailable.
We hope to return soon while increasing our scalability.

Thank you for your interest in Feng-GUI.

Feng-GUI at The 40 coolest free applications around

Seopher, a site reviewing and discussing Linux, internet marketing and blogging,
recommends the Feng-GUI ViewFinder heatmap service in a list of 40 cool and free applications.

http://seopher.com/articles/the_40_coolest_free_applications_around

thanks Seopher!

Feng-GUI at Israel NRG newspaper


Feng-GUI at Israel NRG newspaper
http://www.nrg.co.il/online/10/ART1/614/158.html

thanks to Ophir Hechter for the news.

Tuesday, August 07, 2007

Issues of Saliency and Recognition in the Search for Web Page Bookmarks


Alex has published his great Masters Thesis on:
"Issues of Saliency and Recognition in the Search for Web Page Bookmarks"

The thesis aims to provide empirically determined guidelines for web producers on how to title pages in order to optimise the recognition of bookmarks by users, and to increase the rate of revisitation as a result.

Sunday, July 08, 2007

Google Images Search for faces

Google didn't buy Riya; instead it bought Neven Vision, adding face recognition to the Images Search service.
For example, you can search for Paris in general
or search for Paris Hilton.
Just add imgtype=face to the search query (a small sketch of building such a query follows below).
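
A hedged sketch of building those two query URLs, assuming the 2007-era Google Images URL format (the base URL and parameter may no longer work the same way today):

```python
# Illustration only: construct an image search query with and without imgtype=face.
from urllib.parse import urlencode

base = "http://images.google.com/images"
all_paris  = base + "?" + urlencode({"q": "Paris"})
faces_only = base + "?" + urlencode({"q": "Paris", "imgtype": "face"})

print(all_paris)    # http://images.google.com/images?q=Paris
print(faces_only)   # http://images.google.com/images?q=Paris&imgtype=face
```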


Monday, May 28, 2007

Google eye-counting video camera


Google unveiled an eye-counting video camera that could enable the company to extend its highly successful online business model to brick-and-mortar advertisers.

The Eyebox was developed by Xuuk Inc. (Kingston, Ontario).

Using its PageRank technology, Google (Mountain View, Calif.) has been able to collect revenue from advertisers based on the number of ads on which people are clicking.

Now with the Eyebox, Google can determine which billboards or products people are looking at (32 feet range) in mall corridors or on store shelves, and count them in the same manner that Google counts clicks for online ads.

Tuesday, May 15, 2007

Good-Gaze Attention heatmap


Stan, from Lijit, just mailed me about a new visual attention service from Germany called Good-Gaze.

The Good-Gaze team are from the Cognitive Science department at Osnabrück University, Germany.

I have registered for the Good-Gaze service, and I am looking forward to seeing their heatmaps.
Thanks, Stan!

Friday, May 04, 2007

tobii new generation eye tracking


Tobii
announced the launch of a new generation of eye-tracking hardware and analysis software.

Fundamental technology advances and new tools facilitate the use of eye tracking and add substantial new value to usability and user experience studies. Making up a complete lab solution, the new products will be presented at CHI in San Jose, CA, on April 29.

Wednesday, May 02, 2007

15 must have web developer tools for beginners

Andrew Sellick, a Lead Interactive Developer working for a digital marketing agency called Green Cathedral, has listed the Feng-GUI ViewFinder service as one of the "15 must have web developer tools for beginners".
We are honored to be on one list with the other tools.
We will continue improving the ViewFinder heatmap service as the world’s first free digital cortex.

Surprising Studies of Visual Awareness

VisCog Productions and the Visual Cognition Lab released this amazing DVD in 2003, which includes the famous "gorilla/basketball" video.

More clips can be found at the Lab:
http://viscog.beckman.uiuc.edu/djs_lab/demos.html

Wednesday, April 25, 2007

We need a Digital Cortex


Stan James (founder and CTO of Lijit Networks)
on how the world of information needs a digital cortex, and on the role of attention in the relations between consumer, publisher and advertiser.

Attention to dollars, and other exchanges

We need a Digital Cortex

Digital Cortex 2 - Information overload in the brain

Monday, April 23, 2007

SpikeNet human visual system


SpikeNet uses processing algorithms directly inspired by the strategies used by the human visual system, which outperforms even the most sophisticated machine vision systems. Indeed, the human visual system is able to analyse a complex scene in a fraction of a second.
 

Tuesday, April 17, 2007

Foveon X3 Technology

foveon.com
A digital camera should see color the way the human eye does.

"It's easy to have a complicated idea," Carver Mead used to tell his students at Caltech. "It's very, very hard to have a simple idea."

The genius of Carver Mead is that over the past 40 years, he has had many simple ideas. More than 50 of them have been granted patents, and many involved him in the start-up of at least 20 companies, including Intel. Without the special transistors he invented, cell phones, fiber-optic networks, and satellite communications would not be ubiquitous. Last year, high-tech high priest George Gilder called him "the most important practical scientist of the late 20th century."
"Nobody," Bill Gates once said, "ignores Carver Mead."



X3 is the latest and most innovative product from Foveon Inc., the Silicon Valley digital-imaging company that Mead, 68, founded in 1997. Named for the fovea centralis (the part of the human retina where vision is sharpest and most color perception is located), Foveon took as its mission another radically simple idea Mead loves: "Use all the light."

Monday, April 16, 2007

ViewFinder FireFox AddOn

Create webpage heatmaps directly from your FireFox.

Install Extension Here
or Download the ViewFinder heatmap extension for FireFox.

sample heatmap of firefox homepage
ViewFinder extension page at FireFox

Friday, April 13, 2007

New feature - upload image file

We have added a new feature to the ViewFinder heatmap service.
Upload image file - You can upload image files from your computer and see their visual attention heatmap.
Image formats: png (recommended), jpg, gif and bmp
Image size: 50-500 KB
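
A small helper sketch (not part of the Feng-GUI service itself; the function name is made up for this example) that checks a file against the limits above before uploading:

```python
import os

ALLOWED = {".png", ".jpg", ".jpeg", ".gif", ".bmp"}   # formats listed above
MIN_KB, MAX_KB = 50, 500                              # size limits listed above

def ready_for_upload(path):
    """Return (ok, reason) for a candidate heatmap image."""
    ext = os.path.splitext(path)[1].lower()
    size_kb = os.path.getsize(path) / 1024
    if ext not in ALLOWED:
        return False, f"unsupported format: {ext}"
    if not MIN_KB <= size_kb <= MAX_KB:
        return False, f"size {size_kb:.0f} KB outside {MIN_KB}-{MAX_KB} KB"
    return True, "ok"

print(ready_for_upload("screenshot.png"))   # e.g. (True, 'ok')
```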

Sunday, March 18, 2007

GuitarHeroNoid

The reason I haven't been blogging for some time is this robot I built with the GarageGeeks guys, bringing our image processing and visual attention skills from http://www.Feng-GUI.com to the brain of the GuitarHeroNoid, a robot that plays the PlayStation Guitar Hero game.

more images at flickr
geek power!








The GuitarHeroNoid's first live show, in the KinnerNet 2007 dining room.

Sunday, February 25, 2007

Biologically Inspired Vision Systems

Neuroscientists at MIT have developed a computer model that mimics the human vision system to accurately detect and recognize objects in a busy street scene, such as cars and motorcycles.

"Maybe we shouldn't be surprised," says David Lowe, a computer vision and object recognition expert at the University of British Columbia in Vancouver. "Human vision is vastly better at recognition than any of our current computer systems, so any hints of how to proceed from biology are likely to be very useful."



The article:
http://www.technologyreview.com/Infotech/18210/

The lab:
http://web.mit.edu/bcs/research/

Monday, February 19, 2007

ViewFinder heatmap for videos

The Feng-GUI lab is pleased to announce the video edition of the ViewFinder heatmap.

For example, see how the following videos are heat-mapped by the ViewFinder attention heatmap.

Coca-Cola GTA


The Matrix II trailer


Heineken commercial


Mission Impossible trailer


more info at the research page

Thursday, February 15, 2007

Ehrensenf ("Honour Mustard") Internet TV

Ehrensenf is a show produced specially and exclusively for the Internet. We will do without superlatives here; simply see for yourself whether you can find anything similar anywhere in the German-speaking countries.

and ViewFinder is presented at 2:30min
http://www.ehrensenf.de/2007/02/09/versteckte-songs-afro-frisuren-finetune/?vid=flv

thank you, Katrin kommt.

Sunday, February 11, 2007

2003 Kai Gradert and Phil Clevenger


Cofounders Kai Gradert and Phil Clevenger have designed and developed over 20 award-winning commercial software applications, games, and communication technologies.

For Cooperating Systems, they have hand-picked an experienced global team to develop and support CoSI products and technologies, and to establish a leading market presence as creators and publishers of innovative yet practical networked applications.

Take a look at Kai Gradert gallery of work

Eric Wenger, creator of KPT Bryce


U&I Software (http://www.uisoftware.com) was formed by artists seeking to create tools that would allow them (and by extension other like-minded individuals) to explore new realms of creative expression. It was founded in 1997 by Eric Wenger, creator of KPT Bryce.

Tuesday, February 06, 2007

Google Portrait


Google Portrait is a demonstration system of IDIAP and Torchvision face detection technology. It is for personal and non-commercial use only. We acknowledge Google for providing the image indexing and retrieval service and we guarantee that we don't perform any automated querying. Indeed, the query made by a user is equivalent to a query made directly on Google Images, and the face detection processing is done "on-the-fly". Please note also that we don't make use of the page-ranking information, but only the URLs of images indexed and retrieved by Google.

Watch out Riya! :)

check it out at: http://www.idiap.ch/googleportrait/

for example: A search for Michael Arrington
http://www.idiap.ch/googleportrait/index.cgi?query=Michael+Arrington

Wednesday, January 31, 2007

Object Detection mantra by HCK

Found this nice object-detection mantra from a user called HCK (a small code restatement follows after the three lines below):

Poor evaluation results + good training results + small number of weak classifiers + much training data = too uniform data.

Poor evaluation results + better training results + large number of weak classifiers + much training data = data has too much variation.

Poor evaluation results + poor training results + large number of weak classifiers + much training data = weak classifiers are too weak.
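
Purely as an illustration, the mantra can be restated as a small lookup; the function name and the boolean thresholds for "poor", "good", "small" and "large" are made up here and left to the practitioner's judgement.

```python
# Illustration only: the mantra above restated as a diagnosis lookup.
def diagnose(eval_poor, train_quality, many_weak_classifiers, much_data):
    """train_quality is 'good', 'better' or 'poor'."""
    if not (eval_poor and much_data):
        return "no diagnosis from this mantra"
    if train_quality == "good" and not many_weak_classifiers:
        return "training data is too uniform"
    if train_quality == "better" and many_weak_classifiers:
        return "training data has too much variation"
    if train_quality == "poor" and many_weak_classifiers:
        return "weak classifiers are too weak"
    return "no diagnosis from this mantra"

print(diagnose(eval_poor=True, train_quality="good",
               many_weak_classifiers=False, much_data=True))
# -> training data is too uniform
```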

Saturday, January 13, 2007

Interview with Sebastien Billard

Google auto-translation from French to English of:
http://s.billard.free.fr/referencement/index.php?2007/01/12/342-interview-avec-rafael-mizrahi-feng-gui


The presentation of ViewFinder two days ago raised some comments and questions. Rafael Mizrahi, director of technology at Feng-GUI and creator of the algorithm, agreed to answer my questions.

Sebastien Billard: Hello Rafael, could you introduce yourself to the readers?
Rafael Mizrahi: I have worked in the software industry for more than 16 years. Playing music and painting, I have always had a strong sensitivity for harmony. These two aspects of my personality naturally led me to the study of user interfaces, a branch of computer science research.

SB: When did you start developing this algorithm?
RM: I have been researching and teaching dynamic composition for the last 10 years. The implementation of ViewFinder, strictly speaking, only began about 2 years ago.

SB: Which research did you use to develop ViewFinder?
RM: We are often asked this question, which is why we will add more information about it to the site. But if I had to summarize it in a single word, I would say: saliency (translator's note: the capacity of an element to stand out during visual perception of a scene, to the point of taking on a particular cognitive importance).
More information in this PowerPoint:
http://taln.limsi.fr/site/talnRecital05/session9/landragin.ppt

The ViewFinder algorithm creates a saliency map of the site. Saliency maps have been developed over the last 25 years by computer vision research laboratories. The algorithm was developed and then compared with experimental results from eye-movement research, in order to accurately represent the way humans are attracted by visual elements.
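
As an illustration of what a saliency map is (and explicitly not ViewFinder's own algorithm), here is a minimal spectral-residual saliency sketch in the spirit of Hou & Zhang (2007), written with NumPy and OpenCV; the input file name is a placeholder.

```python
# Illustration only: spectral-residual saliency map for a single image.
import cv2
import numpy as np

def saliency_map(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    gray = cv2.resize(gray, (128, 128))          # saliency is a coarse signal

    spectrum = np.fft.fft2(gray)
    log_amp = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)

    # The "residual": log amplitude minus its local average.
    residual = log_amp - cv2.blur(log_amp, (3, 3))

    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (11, 11), 2.5)
    return cv2.normalize(sal, None, 0, 1, cv2.NORM_MINMAX)

heat = saliency_map("webpage.png")               # placeholder file name
```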

SB: Does your algorithm analyze only contrast, or does it take into account other stimuli or behaviors?
RM: ViewFinder takes into account contrast, but also color, motion, texture and flow, as well as other criteria, with the aim of behaving like an eye and a brain (a "bottom-up" model, from the eye toward the brain). We are also working to include text and face detection capabilities in the algorithm, which are key elements of human attention (a "top-down" model).

SB: What exactly do you mean by "flow"?
RM: Flow, motion, textures: everything related to the patterns one can find in images. For example, take a small car (let's say 2% of the surface of the image) following a mountainside road. The motion-detection algorithm included in ViewFinder can identify this car, because it breaks the fluidity of the mountain's texture.
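
As a hedged sketch of that idea (not ViewFinder's motion detector), dense optical flow between two frames can flag pixels whose motion deviates from the rest of the scene; the frame file names are placeholders.

```python
# Illustration only: spot a small moving object that "breaks the flow" of a scene.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder frames
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
magnitude = np.linalg.norm(flow, axis=2)

# Pixels whose motion deviates strongly from the scene's median motion
# (e.g. the small car against the still mountainside) pop out as salient.
anomaly = magnitude - np.median(magnitude)
ys, xs = np.where(anomaly > anomaly.std() * 3)
if len(xs):
    print(f"moving object around x={int(xs.mean())}, y={int(ys.mean())}")
```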
SB: And concerning text, do you mean analyzing the meaning of the text, or only its appearance?

RM: Text detection (in fact, text localization), like face detection, is used to determine the places where text and faces appear. These are classification algorithms that locate patterns, but do not try to compare them against a biometric database or to perform character recognition. It is only a matter of knowing that there is something interesting at a given place.
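
A minimal sketch of that kind of face localization, using OpenCV's stock Haar cascade purely as an example of a classifier that reports where faces are without identifying who they are; the photo file name is a placeholder.

```python
# Illustration only: locate (not recognize) faces in an image.
import cv2

img = cv2.imread("photo.jpg")                       # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    print(f"face region at x={x}, y={y}, size {w}x{h}")
```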

SB: Your tool often suggests that visual attention is paid to the edges, even though these zones are empty. Is this a bug? An artifact?

RM: Indeed, a number of people have pointed this out to us, and we plan to provide examples and explain these results. It is not a bug. Very often these are areas presenting a strong contrast with the interior zone, and these zones attract your attention, even if it is in a subliminal way and they contain nothing significant. As the article "Psychology of Form and Dynamic Symmetry" points out, rhythm is to time what symmetry is to space.


SB: Your tool does not analyze meaning, i.e. what the elements signify. To what extent does the content of the text or the images affect visual attention? Does visual attention depend on what is represented, or only on the way in which things are represented?
RM: Attention can be both reflexive and impulsive ("bottom-up") and cognitive, related to context ("top-down"). It depends on both the "how" and the "what".


Take this example: you are driving at night on a ring road. On this road, a car is parked with its indicator lights flashing. Your attention is drawn by these lights that turn on and off ("bottom-up"). You continue along your road and start to ignore these lights ("top-down"), because you know this car will no longer have any influence on you. It is just a parked car.

SB: What are the future projects and developments?
RM: Our company Feng-GUI specializes in visual perception, whether attention or attraction. Our business model is to develop various applications of ViewFinder and then integrate them into the products of leading companies such as Apple, Adobe, Google, Yahoo, etc.



http://www.public.asu.edu/~detrie/msj.uc_daap/article.html