Friday, November 08, 2024

Feng-GUI Tutorials - AOIs Compare

 

Welcome to Feng-GUI’s AOIs Compare feature, a powerful tool for analyzing and comparing Areas of Interest across multiple images in your project. With this feature, you can select specific images and instantly compare AOI values in a detailed table.

For each image, the comparison table provides scores on key metrics:

* Clear – the clarity of the content
* Complex – the cognitive demand of visual elements
* Focus – the concentration of viewer attention
* Exciting – the engagement potential
* Visibility Score for each defined AOI

To ensure an accurate comparison, all selected images must have the same AOI names defined. To facilitate this, you can copy AOIs from one image and paste them onto the others. Once pasted, simply adjust each AOI’s position and size so they align accurately with the intended elements in each design. This step is key for producing reliable and meaningful comparisons.

This comparison report is particularly valuable when evaluating different variants of packaging, web layouts, or ad visual designs. By comparing these metrics across versions, you gain insight into which design elements capture attention most effectively. The report can be easily copied to the clipboard or downloaded as a CSV file, providing a convenient way to integrate the insights into your workflow. With Feng-GUI’s AOIs Compare, selecting the most effective design to enhance visual impact and viewer engagement has never been easier.
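Once the comparison is downloaded as CSV, you can rank variants programmatically. This is a minimal sketch, assuming hypothetical column names (`Image`, `AOI`, `Visibility`); the real export's headers may differ:

```python
import csv
import io

# Hypothetical excerpt of an exported AOIs Compare CSV.
# The actual column names in Feng-GUI's export may differ.
CSV_TEXT = """Image,AOI,Visibility
variant_a.png,Logo,62
variant_a.png,CTA,41
variant_b.png,Logo,58
variant_b.png,CTA,55
"""

def best_variant_for_aoi(csv_text, aoi_name):
    """Return the image whose named AOI has the highest Visibility score."""
    rows = [r for r in csv.DictReader(io.StringIO(csv_text))
            if r["AOI"] == aoi_name]
    return max(rows, key=lambda r: float(r["Visibility"]))["Image"]

print(best_variant_for_aoi(CSV_TEXT, "CTA"))   # variant_b.png
print(best_variant_for_aoi(CSV_TEXT, "Logo"))  # variant_a.png
```

The same pattern extends to the other metrics (Clear, Complex, Focus, Exciting) by switching the column used in the `max` key.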

Sunday, September 29, 2024

Expanded Team Sharing for Basic and Professional Plans!


We’re thrilled to announce a major update to our Team Sharing feature, giving even more flexibility and collaboration options to our users!

For customers on our Basic Plan, you now have access to share your plan with up to 5 team members—a feature previously unavailable. This allows your team to collaborate and utilize Feng-GUI’s powerful attention analysis tools together, without the need for individual subscriptions.

For our Professional Plan customers, we’re increasing the Team Sharing limit from 10 members to 25 members! Now, larger teams can work together more seamlessly, ensuring everyone has access to the same insights and design tools.

This update is part of our commitment to making Feng-GUI more collaborative and accessible for teams of all sizes. We hope this helps your team work smarter and more efficiently!

See Complete Feature List at https://feng-gui.com/products


Tuesday, September 17, 2024

FENG-GUI Tutorials - AOIs

 
00:00 Introduction to AOIs
01:16 Add AOIs
01:52 Auto AOIs
02:40 Copy and Paste
03:16 Analyzing AOI Data
03:58 Visual Features
04:30 Most Important Tip - Create Contrast
04:57 Summary

Today’s session is about creating areas of interest, or AOIs. This topic is crucial for understanding how users interact with visual content, be it websites, advertisements, or packaging. AOIs help us focus our analysis on specific regions of interest within a visual stimulus, allowing us to gather more meaningful data about how users engage with those areas.

What Are AOIs? Let’s start with the basics. Areas of Interest, or AOIs, are specific regions on a screen or within a visual field that you define for the purpose of analyzing eye-tracking data. By setting AOIs, you can determine where users look, how long they spend looking at different areas, and how their gaze transitions between these areas.

Why Are AOIs Important? AOIs are crucial for several reasons:
* Focused Analysis: They allow you to isolate and analyze specific elements of a visual stimulus, such as a button, image, or text block.
* Quantitative Metrics: AOIs provide quantitative data on metrics like gaze duration, fixations, and saccades within defined areas.
* Improving Design: By understanding which areas attract attention and which do not, you can make data-driven decisions to enhance advertisement effectiveness and website design.

Setting Up AOIs. Now, let’s discuss how to set up AOIs effectively. There are several steps involved:
* Define Your Objective: Determine what you want to learn from the analysis reports. Are you interested in user attention, interaction, or navigation patterns? Identify the key elements on your screen or visual stimulus that you want to analyze.
* Add AOIs: In Feng-GUI, use the AOIs "Add Area" menu to create AOIs on your visual content. Carefully place AOIs around elements of interest, and ensure they accurately cover the areas you want to study.
* Label AOIs: Assign clear and descriptive labels to each AOI. This helps in organizing and interpreting your data. Use consistent naming conventions and sizing for AOIs across similar studies to maintain clarity and comparability.
* Refine AOIs: Fine-tune the size and position of AOIs as needed to ensure they capture the relevant content without overlapping or missing important areas. Make sure AOIs are large enough to capture user interactions but not so large that they overlap with other AOIs.
* Validate AOIs: Run "Analyze" and review the reports to confirm the AOIs are capturing the intended areas effectively.

Automatic creation of AOIs uses object detection algorithms that identify key objects or elements in an image, such as faces, products, or text, using machine learning models. These models scan the image, draw bounding boxes around detected objects, and automatically designate these areas as AOIs.

Analyzing AOI Data. Once your AOIs are set up, you’ll want to analyze the data collected. Look at metrics such as:
* "Time To First Fixation" - the time it takes for a viewer's gaze to land on an AOI.
* "Fixation Duration" - how long users spend looking at each AOI.
* "Fixation Count" - the number of times users look at an AOI.
* "Gaze Patterns" - how gaze transitions occur between AOIs.
These metrics can provide insights into user engagement and the effectiveness of different visual elements.

Visual Features provide a comprehensive breakdown of the specific elements within your design that influence user attention. This feature analyzes key visual characteristics, such as contrast, color, edges, and shapes, which play a vital role in guiding the viewer’s gaze. By understanding how these features impact the overall Visibility Score of an AOI, you can make informed decisions about adjustments to improve visibility and focus on critical areas.

That brings us to the end of our lecture on creating AOIs for eye-tracking studies. I hope this has given you a solid foundation on the topic. Let’s open the floor for any questions you might have. Understanding how to effectively set up and analyze AOIs is essential for leveraging eye-tracking data to its fullest potential.
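To make the per-AOI metrics concrete, here is a small sketch that computes time to first fixation, fixation count, and total fixation duration for a rectangular AOI. The fixation record format and units are assumptions for illustration, not Feng-GUI's internal data format:

```python
def aoi_metrics(fixations, aoi):
    """Compute simple per-AOI metrics from a list of fixations.

    fixations: dicts with 't' (onset, seconds), 'x', 'y' (pixels),
               and 'dur' (duration, seconds) -- an assumed format.
    aoi: (left, top, right, bottom) rectangle in pixels.
    """
    left, top, right, bottom = aoi
    hits = [f for f in fixations
            if left <= f["x"] <= right and top <= f["y"] <= bottom]
    if not hits:
        return {"ttff": None, "fix_count": 0, "dwell": 0.0}
    return {
        "ttff": min(f["t"] for f in hits),     # time to first fixation
        "fix_count": len(hits),                # fixation count
        "dwell": sum(f["dur"] for f in hits),  # total fixation duration
    }

fix = [{"t": 0.2, "x": 50, "y": 40, "dur": 0.25},
       {"t": 0.6, "x": 300, "y": 200, "dur": 0.30},
       {"t": 1.1, "x": 60, "y": 55, "dur": 0.20}]
m = aoi_metrics(fix, (0, 0, 100, 100))
print(m)  # 2 fixations land inside the AOI; the first at t=0.2s
```

Gaze patterns (transitions between AOIs) could be derived similarly by classifying each fixation into an AOI and counting consecutive pairs.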

Wednesday, August 07, 2024

FENG-GUI Tutorials - File Input

This tutorial covers file inputs and best practices for file formats, dimensions, and quality, along with several negative examples and how to avoid them.
On our help page, you'll find a list of tips and practices; in this session, we'll take a look at several examples.

00:00 Introduction
00:38 File Names
01:07 Image and Video Dimensions
02:01 Quality and Compression
02:30 Product and Package
03:00 Website analysis
04:50 Outdoor and Indoor



* To begin with, input names: use alphanumeric characters. Otherwise, Feng-GUI will translate the file name into a unique alphanumeric file name. To avoid that, use only English letters, numbers, and dashes; this way, Feng-GUI will preserve your original file name.
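A quick way to pre-clean file names before upload is to sanitize them yourself. This sketch only mimics the safe-name rule described above; Feng-GUI's actual renaming scheme may differ:

```python
import re

def sanitize_name(name):
    """Keep letters, digits, and dashes in the stem; replace any other
    run of characters with a single dash (an illustrative rule)."""
    stem, dot, ext = name.rpartition(".")
    if not dot:                     # no extension at all
        stem, ext = name, ""
    safe = re.sub(r"[^A-Za-z0-9-]+", "-", stem).strip("-")
    return safe + (("." + ext) if ext else "")

print(sanitize_name("summer sale (v2)!.png"))  # summer-sale-v2.png
print(sanitize_name("banner-01.jpg"))          # banner-01.jpg
```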

* Input dimensions. You don't want the input to be too small, and you don't want it to be excessively large. We have recommended input sizes for images and for videos. In this example of an image that is too small, you can see that the image is almost the size of the legend overlay. And here is an example of an image with proper dimensions; if we compare them side by side, you can see that the results are slightly different. Avoid images that are too small. Overly large images cause no negative impact on the analysis, only a slight performance cost.
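A pre-upload size check can catch these cases automatically. The bounds below are assumptions for illustration; check Feng-GUI's help page for the actual recommended input sizes:

```python
# Assumed bounds, not Feng-GUI's official limits.
MIN_SIDE, MAX_SIDE = 600, 4000

def dimension_advice(width, height):
    """Classify an input's pixel dimensions against assumed bounds."""
    if min(width, height) < MIN_SIDE:
        return "too small: re-export at a larger size"
    if max(width, height) > MAX_SIDE:
        return "larger than needed: fine, but analysis may be slower"
    return "ok"

print(dimension_advice(320, 240))    # too small
print(dimension_advice(1920, 1080))  # ok
```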


* Quality of the image and the video. Use only high quality and lossless compression. Don't compress your videos or images too much; otherwise you will get artifacts that change the image, and as a result the analysis report will change as well. Prefer lossless formats: PNG for images, and MP4 at 100% quality for video.
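The point of a lossless format is that it round-trips pixel-for-pixel. A small sketch using the Pillow library (a third-party package, assumed available) demonstrates this; note that re-saving as PNG avoids adding new artifacts but cannot undo artifacts already baked into a JPEG:

```python
import io
from PIL import Image  # third-party: pip install Pillow

# A small in-memory image stands in for your design file.
img = Image.new("RGB", (64, 64), (200, 30, 30))

# Save as PNG and reload: lossless, so the pixels are unchanged.
buf = io.BytesIO()
img.save(buf, format="PNG")
buf.seek(0)
reloaded = Image.open(buf)
print(img.tobytes() == reloaded.tobytes())  # True
```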

* Product and package images. Prefer landscape and square images, and add at least 10% of blank margin around the product. In the first example, the product covers the entire image, which is not ideal; you should leave at least 10% of space around it. In the good example, the product has some white space around it, and the analysis is better.
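If your product shots come in tightly cropped, the ~10% margin can be added programmatically. A sketch with the Pillow library (third-party, assumed available); the white fill color is an assumption:

```python
from PIL import Image  # third-party: pip install Pillow

def add_margin(img, fraction=0.10, color=(255, 255, 255)):
    """Pad the image with a blank border of `fraction` of each side,
    following the ~10% white-space guideline."""
    w, h = img.size
    mx, my = int(w * fraction), int(h * fraction)
    canvas = Image.new(img.mode, (w + 2 * mx, h + 2 * my), color)
    canvas.paste(img, (mx, my))
    return canvas

product = Image.new("RGB", (400, 300), (10, 120, 200))  # stand-in image
padded = add_margin(product)
print(padded.size)  # (480, 360)
```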


* A few tips about website input. We do offer the upload-by-web-address option, but you should avoid it and instead use a snapshot you create yourself, because that gives you more control over how the input image looks. Some examples: here is a good example of a web page image. Only the content of the web page is analyzed, and that is what you want to take a snapshot of. A bad example is one where the analyzed image also contains the browser toolbar and other browser artifacts; you don't want to analyze those. Only the content of the web page should be analyzed.
Another bad example is a snapshot of the entire web page. We obviously don't see the whole page at once, so don't take a full-page snapshot and analyze it. If you want to analyze the page from top to bottom, either take separate snapshots of each section or, easier and better, record a video of scrolling through the page and analyze that video in Feng-GUI. This gives you smooth transitions and also lets you capture and analyze any animations or videos on the page.
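If you take the multiple-snapshots route, a small sketch can compute where to crop a tall full-page screenshot into viewport-sized pieces. The pixel values and overlap are assumptions for illustration:

```python
def viewport_slices(page_height, viewport_height, overlap=100):
    """Yield (top, bottom) crop bounds that split a tall page screenshot
    into viewport-sized pieces, overlapping slightly so nothing is lost
    at the seams."""
    top = 0
    while top < page_height:
        bottom = min(top + viewport_height, page_height)
        yield (top, bottom)
        if bottom == page_height:
            break
        top = bottom - overlap

# A hypothetical 2500px-tall page viewed through a 1080px viewport:
print(list(viewport_slices(2500, 1080)))
# [(0, 1080), (980, 2060), (1960, 2500)]
```

Each (top, bottom) pair can then be used to crop the full-page screenshot into a separate image for analysis.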

* Always prefer landscape, horizontal images, because that is how we see the world, and avoid analyzing vertical, portrait pictures, because that is not how we see things. If a portrait image is all you have, you can still analyze it, but if you are taking the picture yourself, outdoors or indoors, prefer landscape shots.

Saturday, June 15, 2024

AI Meets AI: Revolutionize Your Predictive Analytics with AI Insights!

Get ready to supercharge your predictive analytics experience! Feng-GUI is thrilled to introduce its latest groundbreaking feature: AI-driven interpretation and design recommendations. This innovation is set to transform the way you analyze data and optimize your design strategies, propelling your projects to unprecedented levels of success.

Imagine this: You've just completed a complex analysis using Feng-GUI's state-of-the-art predictive analytics software. Now, instead of spending hours deciphering intricate data patterns and figuring out how to enhance your design, our AI assistant (that's me! I also wrote this blog post) steps in to do the heavy lifting. Yes, you heard that right—I'm here to interpret your analysis reports and provide you with actionable design recommendations tailored to improve your results.

With this new feature, Feng-GUI is not just enhancing its software; it is revolutionizing your workflow. Here's how: Instant Interpretation, Design Recommendations, and Seamless Integration. Save time, enhance accuracy, and boost creativity. In 2007, Feng-GUI was the first to provide AI analysis, and now, it is proud to be the first to offer AI-driven interpretation and design recommendations. Dive into Feng-GUI's predictive analytics visual design software and unleash the full potential of AI-driven insights and recommendations today. The future is now—are you ready to seize it? 




Friday, January 26, 2024

Multilingual Text Detection Transforms Predictive Eye Tracking Analysis!

In a major revelation from Feng-GUI, a trailblazer in predictive eye tracking analysis, a suite of cutting-edge features is set to redefine the understanding of user behavior. Feng-GUI proudly introduces Multilingual Text Detection, seamlessly integrated into a host of context-aware AI features, including facial emotions and object recognition. This comprehensive technology transcends language barriers, offering profound insights into user engagement across diverse digital landscapes. 


Enhanced Contextual Analysis Across Languages: Feng-GUI's Multilingual Text Detection, as part of other context-aware AI features, enables researchers to precisely identify and analyze textual elements in any language. This breakthrough facilitates a global understanding of user behavior, revolutionizing contextual analysis.

Global Optimization of Reading Patterns: Accurately identifying and tracking text regions in various languages, Feng-GUI's technology ensures a global optimization of reading patterns. This empowers designers to tailor content layout, font size, and formatting for enhanced readability and comprehension across linguistic backgrounds.

User Experience Enhancement for International Audiences: Feng-GUI's Multilingual Text Detection goes beyond language to create a more inclusive and personalized experience for users globally. Designers and developers can leverage these insights to craft interfaces that cater to diverse linguistic preferences.

Insights for Global Marketing and Advertising Campaigns: Feng-GUI's technology provides a unique advantage in advertising by understanding how users engage with textual content across languages. This facilitates the analysis of ad effectiveness on a global scale, offering marketers valuable insights into user attention.

Conclusion: Feng-GUI's announcement signifies a quantum leap in predictive eye tracking analysis. The integration of Multilingual Text Detection with Context-Aware AI Features promises to unlock new dimensions in understanding user behavior globally. Researchers, developers, and designers can harness this innovative suite of features to provide inclusive, personalized experiences across languages. Stay tuned as Feng-GUI continues to lead the charge in reshaping the landscape of predictive eye tracking analysis for a more interconnected and accessible digital future.