Channel: Tablets

RealSense Unity Toolkit and Unity 5: Unity 5 needs 64-bit DLLs


While experimenting with RealSense and Unity 5 the other day, I discovered that I was getting the following error.

Failed to load 'Assets/Plugins/libpxccpp2c.dll', expected 64 bit architecture (IMAGE_FILE_MACHINE_AMD64),but was IMAGE_FILE_MACHINE_I386.....

This means that the libpxccpp2c.dll distributed with the RealSense Unity Toolkit is the 32-bit version of the DLL. The fix is simple: copy the 64-bit version of the DLL into Assets/Plugins, replacing the 32-bit version.

You can find the 64-bit version at <install folder>\RSSDK\bin\64bit.
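If you end up repeating this step after SDK or Unity updates, a small editor script can automate the copy. The sketch below is only illustrative: the menu name, the class name, and the source path constant are mine, and you need to point the constant at your actual <install folder> location. Place the script under an Editor folder in your project.

    using System.IO;
    using UnityEditor;
    using UnityEngine;

    // Editor-only helper: copies the 64-bit libpxccpp2c.dll from the RSSDK install
    // folder into Assets/Plugins, overwriting the 32-bit copy shipped by the toolkit.
    public static class RealSenseDllFixer
    {
        // Replace <install folder> with your own RSSDK install location.
        const string kSdk64BitDll = @"<install folder>\RSSDK\bin\64bit\libpxccpp2c.dll";
        const string kPluginDll = "Assets/Plugins/libpxccpp2c.dll";

        [MenuItem("Tools/RealSense/Copy 64-bit libpxccpp2c.dll")]
        static void CopyDll()
        {
            if (!File.Exists(kSdk64BitDll))
            {
                Debug.LogError("64-bit DLL not found at " + kSdk64BitDll);
                return;
            }
            File.Copy(kSdk64BitDll, kPluginDll, true);  // overwrite the 32-bit plugin
            AssetDatabase.Refresh();                    // let Unity pick up the replaced plugin
            Debug.Log("Replaced 32-bit libpxccpp2c.dll with the 64-bit version.");
        }
    }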

 


Facial, Gesture, and Voice Recognition Go Social with Finding BBB


    Download Document

    By John Tyrrell

    Introduction

    Finding BBB* is an Intel® Perceptual Computing Challenge award-winning game created by TheBestSync, a development studio based in Guangzhou, China. Finding BBB combines Intel® RealSense™ technology capabilities—including facial, voice, and gesture recognition—with Facebook* connectivity, personalization, player-created missions, and location-based services. This combination of elements delivers what TheBestSync believes to be an innovative and intuitive user experience.


    Figure 1: Players use gesture controls to navigate their bee through the 3D environment.

    Finding BBB is a freemium social interactive game that builds on the studio’s previous titles BBB GOAL* and JOY*. JOY was a Phase 1 Grand Prize winner in the Intel® Perceptual Computing Challenge. While the game focuses on implementation of the Intel® RealSense™ SDK UI, TheBestSync team has also enabled traditional mouse and keyboard controls and a touch interface for two reasons: first, because gesture and voice recognition are not yet widely used in the gaming industry and are unfamiliar to many players; and second, because it’s important to give players a choice. Players can also use multiple input modalities in parallel, giving them ample freedom in how they play the game.


    Figure 2: In addition to gesture, the player can use keyboard controls if desired.

    Optimizations and Challenges

    Gesture Control

    One of the most significant parts of the optimization work that the team undertook was related to interpreting the data provided by the Intel RealSense SDK. They discovered that the extremely high level of responsiveness and accuracy of the Intel® RealSense™ 3D camera—and the precision and speed of the resulting data—created an unexpected situation: The slightest natural movement of the hand was being translated into sometimes unnecessary movements of the game character. To curtail these extra movements, the team coded a filter that limited the sampling rate on the data stream and thus smoothed the hand movements that the Intel RealSense SDK delivered to the game. This ultimately stabilized the characters’ on-screen movements to a point that was acceptable to the player and visually satisfying in the context of the game environment.
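    TheBestSync has not published its filter code, but the idea is straightforward: sample the tracked hand position at a lower rate and interpolate toward it, so tiny jitters never reach the character. Below is a minimal Unity C# sketch of that approach, assuming a hypothetical GetRawHandPosition() that returns whatever position your own Intel RealSense SDK hand-tracking code produces each frame.

    using UnityEngine;

    // Minimal jitter filter: re-sample the raw hand position a few times per second
    // and smoothly interpolate the on-screen target toward the latest sample.
    public class HandSmoothingFilter : MonoBehaviour
    {
        public float sampleInterval = 0.1f;   // seconds between samples (10 Hz)
        public float smoothing = 8f;          // higher = snappier, lower = smoother

        Vector3 _latestSample;
        Vector3 _smoothedPosition;
        float _nextSampleTime;

        void Update()
        {
            if (Time.time >= _nextSampleTime)
            {
                _latestSample = GetRawHandPosition();   // hypothetical: raw SDK hand position
                _nextSampleTime = Time.time + sampleInterval;
            }

            // Exponential smoothing toward the most recent sample.
            _smoothedPosition = Vector3.Lerp(_smoothedPosition, _latestSample,
                                             smoothing * Time.deltaTime);
            transform.position = _smoothedPosition;
        }

        Vector3 GetRawHandPosition()
        {
            // Placeholder: replace with the hand position delivered by the
            // Intel RealSense SDK hand-tracking module in your own project.
            return transform.position;
        }
    }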


    Figure 3: Players are shown step-by-step how to use the gesture controls.

    Through the testing process, the team was also surprised to discover that the system they implemented for controlling the bee character in-game initially caused some difficulties for players who were unaccustomed to the combination of a 3D environment and gesture control. The team attributed this to players being more accustomed to using 2D mouse controls where only the x- and y-axes are considered. The addition of the z-axis in the context of the full 3D gesture control interface caused issues for some players in establishing their position, moving backward or forward, turning around, or otherwise navigating the 3D environment.

    To improve the user experience, the team worked on reducing the learning curve for the controls. To do this, they implemented an in-game, step-by-step immersion process into the 3D control scheme. The first stages of the game are played using 2D controls, allowing players to become accustomed to the basic gesture controls before being introduced to the additional z-axis. These handholding stages proved useful in helping players master the controls and have fun with the game.

    Location-Based Services

    TheBestSync also enhanced the immersive, augmented reality of Finding BBB by recreating real-world weather and day and night cycles in the game. By integrating the OpenWeatherMap API, which connects to the game using JavaScript*, the game acquires location-specific weather and time-of-day data for every player. These conditions are then reflected in the game universe in real time. If it’s raining, cloudy, or nighttime in the player’s vicinity, Finding BBB will show this in-game.
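    The team wired this up through JavaScript; purely as an illustration, a Unity-side fetch of the same OpenWeatherMap current-weather endpoint might look like the sketch below. The city and API key values are placeholders you would supply yourself.

    using UnityEngine;
    using System.Collections;

    // Illustrative only: fetch current weather for a city and log the raw JSON.
    // In Finding BBB the equivalent call is made from JavaScript.
    public class WeatherFetcher : MonoBehaviour
    {
        public string city = "Guangzhou";          // placeholder
        public string apiKey = "YOUR_API_KEY";     // placeholder: your OpenWeatherMap key

        IEnumerator Start()
        {
            string url = "http://api.openweathermap.org/data/2.5/weather?q=" + city
                       + "&appid=" + apiKey;
            WWW request = new WWW(url);
            yield return request;

            if (string.IsNullOrEmpty(request.error))
                Debug.Log("Weather JSON: " + request.text);   // parse and map to in-game weather
            else
                Debug.LogWarning("Weather request failed: " + request.error);
        }
    }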


    Figure 4: Real-world atmospheric conditions such as rain are rendered in the game world.

    Voice Recognition

    Intel RealSense SDK voice recognition is used for the spelling tasks in the game, where the player must say words out loud to complete missions. The game contains a number of preset words (for example, “NAME”), whose letters are displayed jumbled up on-screen. The player then needs to work out what word the letters spell and say it out loud. The Intel RealSense SDK voice recognition capability then detects whether the player said the correct word.
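    The jumbled display and the match check are ordinary game logic rather than SDK code. A rough sketch, assuming the recognized text arrives as a string from your speech-recognition callback:

    using System;
    using System.Linq;

    // Plain game logic for a spelling challenge: shuffle the letters of a preset
    // word for display, then check whatever the voice module recognized against it.
    public class SpellingChallenge
    {
        readonly string _answer;
        readonly Random _rng = new Random();

        public SpellingChallenge(string answer)
        {
            _answer = answer.ToUpperInvariant();
        }

        // Letters in random order, e.g. "NAME" -> "AMNE", shown on screen.
        public string JumbledLetters()
        {
            return new string(_answer.OrderBy(c => _rng.Next()).ToArray());
        }

        // Called with the phrase returned by the speech-recognition callback.
        public bool IsCorrect(string recognizedPhrase)
        {
            return string.Equals(recognizedPhrase.Trim(), _answer,
                                 StringComparison.OrdinalIgnoreCase);
        }
    }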

    Players also have the option to set their own similar spelling tasks and share them with friends.


    Figure 5: Players complete spelling challenges using the voice-recognition functionality.

    Facial Recognition

    In Finding BBB, players take photos of themselves and map their images onto the bee character that they control in the game universe. TheBestSync views this integration of the player into the game world as a form of augmented reality, a field in which the company has many roots. The ultimate goal of Finding BBB is to make the player into the hero of the game.


    Figure 6: Facial recognition is used to map the player’s face onto their bee character.

    Social Functionality

    Because the social relationships between players are important to the game experience, Finding BBB uses the Facebook API, employing its social login and each player’s Facebook friends list. However, in China, where Facebook is blocked, the game uses the similar functionality of the WeChat* and Viber* social networks. Because of the inability to directly access Facebook locally, the team used a VPN in order to have a non-Chinese IP address to test the Facebook functionality of the game. Although this can result in a lower connection speed than the team would achieve if they were able to connect locally, using a VPN is a simple solution that can be deployed to address this issue.


    Figure 7: The Facebook* social login allows players to connect and share with friends online.

    Testing and Analysis Using the Intel® VTune™ Amplifier

    The testing of Finding BBB began at the prototype stage, and up to 40 app developers within the company gave their feedback to the core development team. The SDK research team at Intel also provided regular testing feedback, as did a game expert and an engineer from the Intel R&D team in Shanghai. TheBestSync used Intel® VTune Amplifier as an important tool to test and manage the system resource allocation of Finding BBB for optimum CPU and GPU performance. This included helping the game accurately judge when to activate any available multiple threads. Once the game had been developed beyond the prototype stage, the team invited friends from outside TheBestSync’s development department to participate in the testing.

    After testing and analysis, the team found that the huge 3D scenes they had initially created, which were rendered in real time, conflicted with the resource allocation that the Intel RealSense SDK required. This caused the game to suffer a dip in performance, which manifested as a frame-rate drop on the target Intel® Core™ i5 processor hardware to 15 frames per second (fps)—significantly below the target of 30 fps. To address the problem, the team modified the code to give a more reasonable resource allocation to the CPU and GPU. The result was noticeably better performance and a stable frame rate of 30 fps on a wide variety of devices, including those with lower specs.

    What’s Next for Finding BBB

    Finding BBB has achieved TheBestSync’s vision of a social interactive game that harnesses all the core capabilities of Intel RealSense technology. However, the team has identified areas where the game can be improved. Despite having already made significant changes to improve the learning curve for the controls, TheBestSync believes that navigating the 3D environment can be made even easier and more accessible for the player. The goal is that anyone, of any age, can start to play the game and have a rewarding experience with a minimal amount of coaching. This aspect of user-friendliness is what the team is continuing to enhance.

    Augmented reality and virtual reality are among TheBestSync’s areas of expertise, and the team believes that there is significant potential in combining Intel RealSense technology and virtual reality technologies such as Oculus Rift. By bringing the intuitive and natural control interface into the virtual reality arena, the company believes that it can create an experience that is immersive to a degree that has not yet been reached. The player will be able to use virtual reality to “feel” the game while using Intel RealSense technology to control it. Another potential application of this combination of technologies is the creation of navigable 3D-rendered videos that can be viewed and manipulated in 360 degrees.

    Since it began working with Intel RealSense technology two years ago, TheBestSync has already seen the development community’s landscape change considerably. When it entered the Intel Perceptual Computing Challenge with JOY, only a few developers were aware of Intel RealSense technology. Since then, the development community’s engagement with this technology has mushroomed. Interest in it has even spread to a number of universities that want to learn more about the technology’s capabilities and potential. As a result of this interest and the company’s own first-hand experience with the technology, there is a great deal of excitement at TheBestSync about the future potential of Intel RealSense technology.

    About the Developer

    Headed by CEO Alpha Lam, TheBestSync was founded in Guangzhou, China, in 2011. From the beginning, the company’s focus has been on developing products that exploit augmented reality and perceptual computing technologies to deliver innovative user experiences. Its products were initially designed for iOS* and Android* mobile platforms, later moving to Windows* platforms. Today, the company is investing heavily in developing interactive hardware and software that make innovative use of Intel RealSense technology, such as the RealSense “Fun Cap” claw machine, which was demoed during the 2014 Intel Developer Forum keynote. Its products are being designed for the coming wave of Intel RealSense technology-based devices, which will include PCs, laptops, tablets, and the Windows Phone*.

    TheBestSync has also established relationships with a number of OEMs, including Lenovo, Haier, Acer, HP, and Dell, with the goal of having its apps preloaded onto Windows* devices. The company previously created an augmented reality-based app that was bundled with the Lenovo X1 Carbon* at launch. BBB GOAL, its more recent app, is among those that will be preloaded onto forthcoming Intel RealSense technology-equipped devices, something that the company hopes to repeat with Finding BBB.

    Always looking to innovate, TheBestSync is currently advancing its use of social networks such as Facebook and WeChat (in China) as app payment platforms, while also investigating the role of the Internet of Things in the experiences that it is developing. The company plans to launch Finding BBB in the spring of 2015 for Windows 8 devices with the integrated Intel RealSense 3D camera. TheBestSync will take a two-fold approach in marketing the game, working with OEMs to have the app preinstalled on Intel RealSense technology-based devices and marketing the app itself in the Windows Store.

    Helpful Resources

  • Finding BBB
  • TheBestSync
  • Intel RealSense
  • Intel RealSense SDK
Introduction to Cross-Platform Mobile Application Development


    Did you know that you can develop high-quality cross-platform mobile applications using only HTML, CSS, and JavaScript?

    Okay, that may be a trivial question; by now many people know it. But a great many developers are still unaware of it, or are still skeptical or not yet fully convinced about using these technologies for mobile development. For exactly that reason I have decided to write a series of blog posts and a few technical articles, which will be linked at the bottom of this post as they are published, to show you how, with your existing web skills and the Intel XDK IDE, you can develop high-quality mobile applications quickly and, ultimately, very efficiently.

    For the purposes of this series it is important to understand that the applications running on a mobile device are essentially of three types:

    • Mobile Web Apps
      • These apps are essentially websites designed to work like mobile apps, but they run in a web browser on the device.

    • Native Apps
      • These apps are developed in a platform-specific programming language, such as Objective-C for iOS, Java for Android or BlackBerry, or C# for Windows Phone. Unlike mobile web apps, native apps can access all the capabilities of the device and the operating system, and they range from simple utility apps to very complex 3D games.

    • Hybrid Native Apps
      • These apps are the focus of this series of blog posts. They are built with HTML but, unlike web apps, they run inside a native container. Hybrid native apps can access many device and operating-system capabilities, such as the camera, the gyroscope, the accelerometer, and so on.

    If at this point you are still wondering what makes hybrid app development a winning idea, there are several answers:

    1. Savings in human resources (for the company).
    2. The technologies are easy to learn (for the developer).
    3. Development and testing costs are extremely low, and both are fast.
    4. One app can be developed for all operating systems.
    5. The finished app can be sold in the stores just like a native app.

    If you are a web developer, you already have the skills needed to put together a mobile application; more generally, if you know how to build a website, you will certainly be able to build a mobile app. Let's see what you need to start developing hybrid applications and to follow this series of posts:

    Intel XDK -> https://software.intel.com/it-it/html5/tools
    Intel XDK documentation -> https://software.intel.com/en-us/xdk/docs/intel-xdk-overview
    Bootstrap documentation (the framework is already integrated into the XDK) -> http://getbootstrap.com/getting-started/

    Fabrizio Lapiello
    Intel Software Innovator

The Path from Intel® Perceptual Computing SDK to Intel® RealSense™ SDK in Unity


    Download PDF

    Code-Monkeys engaged with Intel’s 3D sensing initiative back in 2012 when we first saw the (rather raw) technology at the Intel® Developer Forum that year. Back then, it was called “Intel Perceptual Computing,” and it used a Creative* camera. Since then, the initiative has been upgraded—a lot—and renamed Intel® RealSense™ technology.

    This article is intended for developers who started with Intel Perceptual Computing, particularly those who use Unity, and want to migrate or upgrade to Intel RealSense technology.

    The Short Story: Start Over, but in a good way

    The changes that have been made are huge and occur at every level. The hardware profile is totally different; the function calls are different. In fact, just about everything, all the way down to the root conceptual framework, has changed. With all that water under the bridge, the short story is that there is no real “migration” path between the two SDKs. But don’t take that to mean all of your time learning Intel Perceptual Computing was wasted, as you’ll see below.

    To address what’s involved in migrating your code, here goes:

    1. Completely remove the Intel Perceptual Computing SDK. Completely means completely. The two SDKs do not play well together and having both installed on the same rig can cause intermittent and maddening conflicts.
    2. Say goodbye to your Creative IR Camera. It was good while it lasted, but the Creative hardware is not compatible with the Intel RealSense SDK.
    3. Install the new Intel RealSense SDK. You can find it here: https://software.intel.com/en-us/realsense/
    4. Go back to the studs of your Intel Perceptual Computing project and get ready to rework all of the interface and control pieces.

    Now, while you’ll need to remap controls to actions, the good news is this: the new SDK is quite a bit easier to use, and in most cases the real challenge in implementing this kind of interface isn’t in the code syntax. It’s in the way you think about your interface from the ground up. If you thought through all the intricacies of a Natural UI previously, it’s highly likely everything will carry over between SDKs.

    Intel Perceptual Computing SDK in Hindsight

    We spent a lot of time working with the Intel Perceptual Computing SDK, and it’s worth going back over what worked and what didn’t. After all, we learned a lot of lessons on this platform. Overall, the power of the Intel Perceptual Computing SDK was in its transparency. We had full and easy access to all of the data that was coming from the Creative camera, and that was a LOT of data.

    One of the first challenges we faced while using this SDK was how to filter out what we didn’t need, and while that was a genuine speed bump, the SDK allowed us to make the tactical decisions about what was important, how to focus on the data we needed, and how to interpret what we got. It was raw and choppy and there was a ton of it, but we had options, which is to say we had power.

    But this empowering reality also made Intel Perceptual Computing overwhelming at first. There was limited documentation, no best practices, and almost everything had to be processed by custom scripts. The result was a need to trim back our initial scope to accommodate the steep learning curve.

    Overall, Intel Perceptual Computing reminded me of UNIX*. The people who know it rave about the available power, the options, and the limitless ways to display their mad command line skills. But for the less committed user, it can seem needlessly difficult for 95% of a user’s daily work.

    Comparing Intel® Perceptual Computing to Intel RealSense™ Technology

    The first thing an Intel Perceptual Computing veteran will notice about the Intel RealSense SDK is how much more WYSIWYG it is. Intel RealSense technology, specifically the Unity plug-in, includes a variety of panels and modifiers that make drag-and-drop functionality a genuine reality. Event-based logic means you can rely on the SDK’s built-in functionality to trigger actions that previously had to be sensed, listened for, and linked by hand.

    An excellent example of this is the little-publicized Emotion Tracking functionality. Right out of the box, with a trivial amount of code, the system can look for and “detect” a variety of emotions and sentiments defined by certain facial expressions. A clever developer should be able to seamlessly integrate these user-generated events with context sensitive actions and this entire package can be installed in just a few minutes.

    Of course, a more “user-friendly” system comes at the cost of granular control. Developers have a lot less access to raw data in the Intel RealSense SDK and customizing processing algorithms is no longer a simple matter.

    In the end, though, the Intel RealSense SDK is a major improvement over Intel Perceptual Computing at basically every level. And while the nerdcore coder in us misses the unfettered data stream, the deadline-oriented coder is grateful for the improved level of accessibility and productivity.

    A Real-world Example: Making a Mouse with the Intel RealSense SDK

    In the following example we use Unity 4.5 and NGUI with the Gold version of the Intel RealSense SDK to demonstrate how easy it is to make a “mouse” object. The process follows this series of steps:

    1. Create a 3D UI in NGUI with an anchor and panel.
    2. Add a GameObject to the panel called RSMouseLocation.
    3. Add a Tracking Action script from the Intel RealSense SDK.
    4. Add an Icon for visual feedback.
    5. Set up a UI Camera.
    6. Add a Right Hand Grab Trigger from the Intel RealSense SDK.
    7. Create an RSMouseManager to listen for the Grab Trigger Event.
    using UnityEngine;
    using System.Collections;
    using System.Collections.Generic;
    
    public class RSMouseManager : MonoBehaviour {
    
    	public Camera myCamera;
    	public bool handClosed;
    	public float swipeDistance = 100.0f;
    	public Vector3 currentMousePosition;
    
    	public UISprite myHandIconSprite;
    	public string openHandSpriteName;
    	public string closeHandSpriteName;
    	public List<string> levelsToShowHandIcon;
    	public bool showHandIconOnPause;
    
    	private Vector3 inputStartLocation;
    	private bool foundSwipe;
    
    	void OnLevelWasLoaded(int level)
    	{
    		ResolveHandIconVisibility(true);
    	}
    
    	void OnEnable()
    	{
    		PauseManager.timeSwitch += ResolveHandIconVisibility;
    	}
    
    	void OnDisable()
    	{
    		PauseManager.timeSwitch -= ResolveHandIconVisibility;
    	}
    
    	void ResolveHandIconVisibility(bool unpaused)
    	{
    		if(!unpaused && showHandIconOnPause)
    		{
    			myHandIconSprite.gameObject.SetActive(true);
    		}
    		else
    		{
    			if(levelsToShowHandIcon.Contains(Application.loadedLevelName))
    				myHandIconSprite.gameObject.SetActive(true);
    			else
    				myHandIconSprite.gameObject.SetActive(false);
    		}
    	}
    
    	// Use this for initialization
    	void Start () {
    		handClosed = false;
    	}
    
    	// Update is called once per frame
    	void Update () {
    		currentMousePosition = myCamera.WorldToScreenPoint(this.transform.position);
    		OperateRSMouseInput();
    	}
    
    	public void OpenHand(){
    		if(handClosed){
    			//Debug.Log ("Hand open. ");
    			if (!foundSwipe) {
    				GameInput.instance.AcceptExternalClick(inputStartLocation);
    			}
    			handClosed = false;
    			foundSwipe = false;
    			if(myHandIconSprite != null && openHandSpriteName != "")
    				myHandIconSprite.spriteName = openHandSpriteName;
    		}
    	}
    
    	public void ClosedHand(){
    		if(!handClosed){
    			//Debug.Log ("Hand closed.");
    			//Vector3 screenPoint = myCamera.WorldToScreenPoint(this.transform.position);
    			//GameInput.instance.AcceptExternalClick(screenPoint);
    			handClosed = true;
    			inputStartLocation = currentMousePosition;
    			if(myHandIconSprite != null && closeHandSpriteName != "")
    				myHandIconSprite.spriteName = closeHandSpriteName;
    		}
    	}
    
    	void OperateRSMouseInput() {
    		if (handClosed) {
    			// Determine if a swipe is occurring
    			if (Mathf.Abs(inputStartLocation.x - currentMousePosition.x) > swipeDistance && !foundSwipe) {
    				foundSwipe = true;
    				if (inputStartLocation.x > currentMousePosition.x) {
    					GameInput.instance.AcceptExternalSwipeLeft();
    				} else {
    					GameInput.instance.AcceptExternalSwipeRight();
    				}
    			}
    			if (Mathf.Abs(inputStartLocation.y - currentMousePosition.y) > swipeDistance && !foundSwipe) {
    				foundSwipe = true;
    				if (inputStartLocation.y > currentMousePosition.y) {
    					GameInput.instance.AcceptExternalSwipeDown();
    				} else {
    					GameInput.instance.AcceptExternalSwipeUp();
    				}
    			}
    		}
    	}
    }

    Given that, here’s what it looks like in the Unity panel view:

    Unity Panel View

    Intel® Perceptual Computing API -> Intel® RealSense™ SDK equivalent:

    • QueryVoiceRecognized -> Query Voice Recognized
    • pp.QueryGeoNode(PXCMGesture.GeoNode.Label.LABEL_BODY_HAND_LEFT, out leftHand) and pp.QueryGeoNode(PXCMGesture.GeoNode.Label.LABEL_BODY_HAND_RIGHT, out rightHand) -> Query Geo Node
    • pp.QueryFaceLocationData(faceId, out ddata) -> Query Face Location
    • pp.QueryGesture(PXCMGesture.GeoNode.Label.LABEL_ANY, out gdata) -> Query Gesture

    Conclusion

    While there is no 1:1 path to migrate an application from Intel Perceptual Computing to Intel RealSense technology, developers who know the former will be encouraged to see how far the latter has come in just a single year. And development is continuing at a healthy pace. With laptops, 2 in 1s, and All-in-Ones with integrated Intel RealSense 3D cameras soon to appear in the market, it’s a great time to give the technology a try and write applications and games now, while the app store space is wide open to clever, early-adopting developers.

    About the Author

    Chris Skaggs is a 15 year veteran of the web and mobile software industry. The founder and CEO of both Code-Monkeys and Soma Games LLC, Chris has delivered software applications to some of the country’s most discerning clients like Intel, Four Seasons, Comcast, MGM and Aruba Networks. In addition to corporate customers, Code-Monkeys and Soma Games have programmed many casual and mid-core games for iPhone, iPad, Android and Mac/PC platforms. A Black Belt in Intel’s Software Developer Network, Chris also writes and speaks on topics surrounding the rapidly changing mobile application environment at venues like GDC Next, CGDC, Casual Connect, TechStart, Serious Play, and AppUp Elements.

Intel® RealSense™ SDK Code Samples


    Download PDF

    Download SDK Code Sample

     


     

    Abstract

    This set of code samples was created to be used during the Brazilian Intel RealSense Hands-on Labs to make it easier for the participants to understand how to use the Intel® RealSense™ SDK. The 12 samples use the C# SDK wrapper and provide simple, console-based apps that print the information available from the RealSense modalities, including face and hand tracking and speech recognition. There are also two WPF apps showing how to display the camera streams and how to achieve background subtraction.

    Introduction

    As part of preparation for the Hands-On Labs Brazil, we created 12 code samples with instructions to show how to leverage Intel RealSense voice and camera capabilities with simple examples. The code is commented (in English) and can be freely shared with the worldwide developers’ community.

    The samples were implemented using C# and are basically simple console applications that show how to use the RealSense SDK functionalities.  The code has been tested with the Intel RealSense SDK R2 (RSSDK).

    We hope you enjoy our contribution and if you have any questions or need help, please use the comments section below.

    Pre-Requisites to Run the Samples

    Important Intel RealSense Documentation Links

    Available Samples

    Camera Calibration Library

    Camera Calibration is a library project that receives a device and a modality and performs the proper calibration to improve the quality of camera recognition for that specific mode, for example hand tracking.

    Reference Links:

    Device

    The Device sample demonstrates device enumeration: select a device, get its available streams, and set device configurations. Note: this sample uses the Camera Calibration Library to configure the devices.

    Reference Links:

    Emotion 

    The Emotion sample lists emotions using SenseManager with a procedural implementation. This sample finds all the emotion data and prints each one along with its intensity value.
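    For orientation, the core of such a procedural loop looks roughly like the sketch below. This is not the sample's exact code: the PXCMSenseManager and PXCMEmotion names are written from memory of the R2 C# wrapper, so verify them against the SDK reference before relying on them.

    using System;

    // Rough outline of a procedural emotion-listing console app. Verify the
    // PXCMSenseManager/PXCMEmotion calls against the Intel RealSense SDK R2
    // reference; they are an assumption in this sketch.
    class EmotionListing
    {
        static void Main()
        {
            PXCMSenseManager sm = PXCMSenseManager.CreateInstance();
            sm.EnableEmotion();
            sm.Init();

            while (sm.AcquireFrame(true) >= pxcmStatus.PXCM_STATUS_NO_ERROR)
            {
                PXCMEmotion emotion = sm.QueryEmotion();
                if (emotion != null)
                {
                    PXCMEmotion.EmotionData[] data;
                    emotion.QueryAllEmotionData(0, out data);   // first detected face
                    if (data != null)
                        foreach (PXCMEmotion.EmotionData d in data)
                            Console.WriteLine("{0}: intensity {1}", d.eid, d.intensity);
                }
                sm.ReleaseFrame();
            }
            sm.Dispose();
        }
    }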

    Reference Links:

    Emotion with Callback

    The Emotion with Callback sample has the same functionality as the Emotion sample, but with a different implementation. It shows how to use handlers in the RSSDK to get module data. It uses the Emotion module, but the approach can be implemented with other modules.

    Reference Links:

    Face

    Face is a sample that implements some of the various functionalities of the Face module. It uses the PXCMFaceData object and processes each type of face information separately.

    Reference Links:

    Face Recognition

    The Face Recognition sample detects a face and checks whether the user is already registered. When the program detects a face that is not registered, the user can press the space bar to register the face in the (in-memory) database. After registration, the sample prints the unique identifier of the recognized face.

    Reference Links:

    Hands

    The Hands sample tracks hands, fingers and gestures. The sample prints how many hands are detected and their positions (image and world), body sides, joints and detected gestures.

    Reference Links:

    Object Tracking

    The Object Tracking sample detects a 2D object using the Intel RealSense SDK. It uses JPEG/PNG markers and reports the X, Y, and Z position of the object as the camera tracks it. Note: This sample requires that the camera is calibrated with a specific tag.

    Reference Links:

    Segmentation


    The Segmentation sample uses the WPF structure to display the camera stream on a WPF form and uses the Segmentation feature to remove the image background.

    Reference Links:

    Speech Recognition

    The Speech Recognition sample shows how to use both speech recognition modes: DICTATION and COMMAND. In dictation mode, it recognizes all the words users say and prints them on the screen. In command mode, the program sets up a standard dictionary, and when the user says one of the added commands, it prints it on the screen.

    Speech Synthesis

    The Speech Synthesis sample is an implementation of the text-to-speech capability of the Intel RealSense SDK. When a sentence is made available in the profile, it converts the sentence to audio and plays it.

    Reference Links:

    Streams

    IR stream view (notice the effect from the outside lighting in the background.)


    The Streams sample uses the RSSDK to display the Creative camera streams (Color, Depth, and Infrared) in a WPF form. The sample selects a stream by its type and shows a window with the selected camera stream, updating the image frame by frame at the selected FPS configuration.

    Reference Links:

    Download the Samples

    To experiment with these samples and learn more about how to use the Intel RealSense SDK, please download the code from here.

    About Intel® RealSense™ Technology

    To get started and learn more about the Intel RealSense SDK for Windows, go to https://software.intel.com/en-us/realsense/intel-realsense-sdk-for-windows.

    About the Authors

    João is a Software Analyst Intern in the Developers Relations Division Brazil. He is studying Information Systems at University of São Paulo and is a Software Developer working mainly with mobile platforms, web applications and RealSense.

    Felipe is an Intel RealSense Technical Evangelist in the Developers Relations Division Brazil. He studied Computer Engineering and worked with different technologies, platforms and programming languages during his career.  His main interests are game development, mobile platforms and HTML5.


    Code Sample: Facial Recognition Using Intel® RealSense™ SDK


    Abstract

    Download Code Sample.zip
    Download Face Recognition Article.pdf

    This code sample uses the Intel® RealSense™ SDK for Windows* to demonstrate some of the facial recognition capabilities of the Intel® RealSense™ user-facing camera. The SDK provides several algorithms for detecting the user’s face, facial landmark point features, head pose (roll, pitch and yaw orientation), and facial expressions.

    The SDK also includes algorithms for comparing the user’s face with a set of reference images stored in a recognition database to determine the user’s identification. This feature has many potential applications in immersive gaming, security, assistive technologies, and other compelling use cases.

    This code sample explores the following facial recognition actions:

    • Detect the number of faces in view of the camera
    • Register and unregister users
    • Capture an image of the user upon registration
    • Create, read, update, and delete a recognition database

    Introduction

    This app demonstrates the basics of using the face recognition capabilities of the SDK; it is not intended to perform a specific task or solve a defined problem. However, by studying this example, you can decide what novel usages you can unlock by using the Intel RealSense SDK in your applications.


    Figure 1. Face Recognition User Interface

    Main Screen

    The sample app displays the camera’s color stream in a WPF Image control as shown in Figure 1. When the app is first launched a database file will not be present, so the User ID indicates “Unrecognized” on the UI. The total number of faces within the field of view of the camera will be shown, but the app intentionally limits its attempts to only recognizing the first face to come into view.

    Note: Recognition of multiple faces within the camera’s field of view is possible if you are developing an application that requires this feature. This is accomplished by first calling the QueryNumberOfDetectedFaces() method and then looping over calls to QueryFaceByIndex(0..n-1) to acquire and act on multiple Face instances.
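    As a sketch, a helper built around those two calls might look like the fragment below. Only QueryNumberOfDetectedFaces() and QueryFaceByIndex() are taken from the text above; the PXCMFaceData type and everything else are assumptions to check against the SDK reference.

    // Hypothetical helper (not from the sample): act on every face the module
    // currently sees. Only QueryNumberOfDetectedFaces() and QueryFaceByIndex()
    // come from the description above; PXCMFaceData.Face and the rest are assumptions.
    static class MultiFaceHelper
    {
        public static void ProcessAllFaces(PXCMFaceData faceData)
        {
            int faces = faceData.QueryNumberOfDetectedFaces();
            for (int i = 0; i < faces; i++)
            {
                PXCMFaceData.Face face = faceData.QueryFaceByIndex(i);
                if (face == null) continue;

                // ...query detection, landmark, or recognition data for this face...
            }
        }
    }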

    Face Tracking Indicators

    By default, the sample app shows a rectangular face marker that tracks the user’s face, but this can be hidden by unchecking the Show face marker checkbox. The tracking rectangle scales in size to the user’s face as he or she moves toward or away from the camera. The face marker will disappear, and the border surrounding the Image control will turn red, when the user moves out of range of the camera (Figure 2).


    Figure 2. User Out of Camera View

    Registering a User

    Clicking the Register User button adds the user’s image to the recognition database in memory. A unique identification number is automatically assigned by the SDK, which is displayed on the screen. The ID number will also be displayed above the tracking rectangle if the face marker is activated (Figure 3).


    Figure 3. User ID Registered

    When an unrecognized face is registered, the app also saves a snapshot in a file named “image.jpg” that can be opened in any image viewer. This feature is included in this sample app to help with testing (Figure 4).


    Figure 4. Image Capture for Testing and Experimentation

    At this point a recognition database is resident in memory but not yet committed to a file on the computer’s hard drive. When the user clicks Save Database, a file named “database.bin” is created in the output folder. It is possible to save and open multiple recognition database files with any valid filename, but this sample app uses a hardcoded filename for the sake of simplicity.

    Unregistering a User

    A recognized user can be removed from the local database in memory by clicking the Unregister User button. This action does not commit the change to nonvolatile disk memory; to do this, the user must click the Save Database button again.

    The user can also delete the database file from the computer’s hard drive by clicking the Delete Database button. This action removes the file from the hard drive but has no effect on the recognition database running in memory. When the app is restarted, the UI will indicate “Database: Deleted” and the recognition database will have to be rebuilt.

    Note: The reason for these discrete steps in the sample app is to separate the various interactions into functional blocks that are easy to follow in the code. As previously mentioned, the app is not intended to perform any specific task, but instead show the basics of using the SDK for facial recognition and simple database handling.

    Code Development Details

    Development Environment

    The sample app can be built using Microsoft Visual Studio* Express 2013 for Windows Desktop or the professional versions of Visual Studio 2013.

    Prerequisites

    You should have some knowledge of C# and WPF, and know some of the basic operations in Visual Studio, like building an executable. Your system needs a front-facing 3D depth camera compatible with the Intel RealSense SDK for the example code to work correctly.

    Code Details

    A number of private objects and member variables with global scope are declared at the beginning of the MainWindow class. These objects are instantiated and variables initialized in the MainWindow constructor.

    A method named ConfigureRealSense() is called to instantiate a SenseManager object and enable the color stream, face module, 3D face tracking, and facial recognition. The code to create a recognition database is also part of this method.

    A worker thread is spawned in which the acquire/release frame loop runs. As described in the Intel RealSense SDK Reference Manual, the SenseManager interface can be used in one of two ways: either by procedural calls or by event callbacks. This sample app uses procedural calls as its interfacing technique.

    The Window_Closing() event handler is raised when the user closes the application by clicking the X button in the upper right-hand side of the main window. The Window_Closing() and Exit button event handlers both call a common method named ReleaseResources() that performs the necessary memory cleanup before the app closes.

    User interface updates are performed in a method named UpdateUI(), which is called from within the acquire/release frame loop. UpdateUI() uses a Dispatcher.Invoke method to perform operations that will be executed on the UI thread. These operations include displaying the color stream via a WPF Image control and displaying status messages.

    Database Details

    Saving the Database

    When a user clicks the Save Database button, a Byte array is declared and dimensioned to the size of the database, which is returned by calling the QueryDatabaseSize() method. The array is then populated by passing it to the QueryDatabaseBuffer() method. Finally, the database is committed to nonvolatile disk memory by simply calling File.WriteAllBytes().
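    In code form, that sequence is short. The sketch below uses only the calls named above; the PXCMFaceData.RecognitionModuleData type standing in for the object that exposes them is an assumption.

    // Sketch of the save path described above. The three method calls are the ones
    // named in the text; PXCMFaceData.RecognitionModuleData is assumed to be the
    // object that exposes them.
    static class RecognitionDatabaseIO
    {
        public static void Save(PXCMFaceData.RecognitionModuleData recognitionData, string path)
        {
            int size = recognitionData.QueryDatabaseSize();   // how many bytes are needed
            byte[] buffer = new byte[size];
            recognitionData.QueryDatabaseBuffer(buffer);      // copy the database into the array
            System.IO.File.WriteAllBytes(path, buffer);       // commit to disk, e.g. "database.bin"
        }
    }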

    Note: The SDK documentation encourages software developers to apply industry standard encryption to protect privacy; however, file encryption techniques go beyond the scope of this introductory code sample.

    Loading the Database

    When the app starts, it tries to open the database file. If a file is found, its contents are read into a Byte array that gets passed to the SetDatabaseBuffer() method. The image-related content contained in the database is then used by the app for subsequent attempts at user recognition (until a new database is saved).
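    The loading side mirrors it; continuing the sketch above, a matching method might look like this (the same assumptions apply):

    // Added to the RecognitionDatabaseIO sketch above: read the file back and hand
    // the bytes to the SDK via SetDatabaseBuffer(), so previously registered users
    // are recognized again.
    public static void Load(PXCMFaceData.RecognitionModuleData recognitionData, string path)
    {
        if (!System.IO.File.Exists(path)) return;          // nothing saved yet
        byte[] buffer = System.IO.File.ReadAllBytes(path);
        recognitionData.SetDatabaseBuffer(buffer);
    }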

    Database Portability

    For this sample app, database files were tested for portability between different computers using both the integrated Intel RealSense camera and the external developer model. The results of this testing were favorable, but developers should be aware of potential differences in camera parameter settings (e.g., power, etc.) that may affect the performance of facial recognition from one camera to the next.

    Check It Out

    Download the app and experiment with it to learn more about how facial recognition works in the Intel RealSense SDK for Windows. 

    About Intel® RealSense™ Technology

    To get started and learn more about the Intel RealSense SDK for Windows, go to https://software.intel.com/en-us/intel-realsense-sdk

    About the Author

    Bryan Brown is a software applications engineer in the Developer Relations Division at Intel.

How to Develop Native Android* Applications with Intel® INDE 2015 IDE Integration and Visual Studio*


    Introduction

    This article is a guide to writing a native Android* application called “Hello World” using the IDE Integration feature of Intel® INDE 2015.

    About Intel® INDE

    The Intel® Integrated Native Developer Experience (Intel® INDE) is a suite of libraries and productivity tools for C++ and Java that speeds up the development of mobile and PC applications through code reuse, performance-sensitive native code, and integrated workflow support. Intel INDE makes it possible to build Windows applications on Intel® architecture and Android applications on ARM and Intel® architecture. Developers are free to use Intel INDE within the integrated development environment (IDE) of their choice, including Microsoft Visual Studio*, Google Android Studio*, and Eclipse*. Intel® INDE also gives developers access to advanced platform capabilities such as media acceleration, context sensing, OpenCL™ 2.0, and threading libraries, along with a select set of compilers and analysis and debugging tools. Intel® INDE comes in three editions: Starter, Professional, and Ultimate. More information can be found in the Intel INDE announcement blog on the Intel® Developer Zone.

    About Visual Studio* integration in Intel® INDE 2015

    Intel® INDE 2015 integrates the vs-android plug-in available for Visual Studio* through a special template called “Android X86 Native Project” under Visual C++. INDE 2015 also comes with a Debugger Extension for vs-android to help you debug your applications. Let’s see how to build and deploy a sample native application using this feature.

    Prerequisites:

    Microsoft Visual Studio* 2012 or 2013 (Professional or Ultimate editions). Express editions are not supported.

    32-bit version of JDK 7 or later.

    Setting up INDE 2015:

    Download Intel® INDE 2015 and start the installation. IDE Integration is included in all editions of the product. Choose the edition you prefer to install, and the screen for your IDE integration option will open. Select the Microsoft Visual Studio* development environment and continue with the installation.

    All the necessary tools will be downloaded and installed, including the Android* SDK, NDK, ANT, ADT plug-ins, vs-android, and so on.

    Creating your first native Android* application with INDE 2015:

    Open Visual Studio* and click FILE -> New -> Project.

    The New Project wizard starts. Under Installed -> Visual C++ -> Store Apps -> Android, you will see the “Android X86 Native Project” template. Change the project name to “Hello World”.

    The Intel X86 Native Development Experience project wizard for Android* starts and lets you choose the project configuration. Choose the same API configuration that is set on the target emulator or device.

    Accept the defaults on the next page, “Activity Settings”, and finish.

    You will now see the generated solution in Visual Studio*.

    Right-click the “Hello World1” solution and then click Properties. The important properties are already highlighted. Choose the correct API level, which should match the emulator you are going to run. You have the option of choosing an ARM target under the target architecture; by default, the x86 architecture is selected. The platform toolset is set to x86-4.6, which points to GCC. You can also choose ICC.

    Let’s quickly go over some important files in Solution Explorer.

    jni/NativeCode.cpp contains the native C++ code, which can access all the native headers and libraries.

    Open res/layout/activity_main.xml from Solution Explorer. This file defines your application’s user interface layout. Note that there is no design view for it yet.

    Open src/MainActivity.java from Solution Explorer. This file defines the event handlers for your application, and the native method is called from here. For example, the “getStringFromNative()” method is the interface call to the function defined in jni/NativeCode.cpp.

    Before building and deploying this example, you must start the emulator. To do so, go to <inde_install_directory>\INDE\IDEintegration\SDK and launch AVD Manager.exe. Start the Intel_Nexus_7_x86 emulator that ships by default with IDE Integration. Make sure its API level is the same one you chose in the Visual Studio* project wizard.

    Now, to build and deploy, choose BUILD -> Build Solution in Visual Studio*. When the build has succeeded, choose BUILD -> Deploy Solution. You should see the “HelloWorld” application installed on the emulator.

    Click the “HelloWorld” application and you should see your first application running.

    Congratulations on taking this important first step!

    Troubleshooting tips

    • To speed up the emulator, install Intel® HAXM. Keep in mind that you must enable Intel® VT in your BIOS and uninstall Hyper-V if it is present on your machine.
    • Make sure JAVA_HOME points to the most recent 32-bit JDK you have installed.
    • You may see the error “Error occurred during initialization of VM, Could not reserve enough space for object heap, Could not create the Java virtual machine”. The fix is to increase the maximum heap size by adding –Xmx512M (it could be any sufficiently large value) to the _JAVA_OPTIONS environment variable.

    More troubleshooting tips may be added based on the comments you leave on this article.

    If you are looking for more help, visit the INDE Support page.


    CalPoly Pre-GDC XDK Gamejam*


    *Side note for the uninitiated: hackathon is an overarching term for any development ‘jam session’ – codefest refers to a software-creating subset of hackathons – gamejams are codefests where the projects are games.

    In the narrow time frame of 24 hours, 37 students from CalPoly made some of the best games I’ve seen come out of a hackathon. Needing less guidance than most, these highly motivated participants flexed their game dev muscles and learned new tools; using the Intel XDK, they were able to play their games on mobile devices almost immediately, and it also allowed us to demo their games at the Game Developers Conference just a few days later.

    At most of our student hackathons we have a larger number of mentors, providing constant assistance and tutorials. The grasp of design concepts (such as proper scope for the duration, which many people have difficulty understanding) was a testament to the quality of the CalPoly Game Development club and the teaching of Foaad Khosmood, President of Global Game Jam (and former Intel employee). This time, after the initial talk about HTML5/JavaScript game development using the XDK, the students got moving fast. The only Intel representatives aside from myself were Peter Morgan (who took most of the pictures) and Rakshith Krishnappa, XDK expert.

    Games Made at the Jam

    The games are all viewable, downloadable, and playable at http://users.csc.calpoly.edu/~foaad/IntelXDKJam/


    Juicy Pong

    by Thomas Steinke, Elliot Fiske, & Tyler Mau

    Juicy Pong is a game that takes the simple idea of Pong and makes it more fun than Fro-Yo.


    Chalk Block

    by Peter Godkin and Joel Anton

    Prevent the bugs from reaching you by blocking their path with colored lines that match their colors!


    Buzzword Bingo

    by Noah Negrey and Brian Quezada

    Buzzword Bingo allows you to play bingo with the latest tech buzzwords, by creating your own boards that utilize buzzwords and even allowing you to take photos of the words you find.


    Power Towers

    by Cody Kitchener

    Get power to the towers or they will be able to protect you


    Meditation Bump

    by Phyllis Douglas, Andrew Wang, Andrew Elliott, Paul Fallon

    Bump away your worldly desires as you try and meditate!


    Survive the Hole Thing

    by Sean Slater and Mitchell Miller

    Move a black hole around to keep asteroids from hitting your broken-down spaceship, but watch out: if too many asteroids hit the ship, it's game over.


    Bit Jumper

    by Sean Troehler and Kevin Nelson

    A sentient program must keep jumping and jumping to escape the task manager.


    Rythm (intentional spelling)

    by Alanna Buss and Kyle Piddington

    A rhythm game that has notes on the left or right. Timing currently goes from Miss, Good, and Perfect.


    Space Hults

    by Ethan Nakashima, Simon Vurens, Andrew Acosta

    Hurriedly throw your spaceship together and survive the asteroid field as long as possible!


    Ship!

    by Cameron Olson and Daniel Kauffman

    Fling asteroids to destroy a nimble spaceship


    Below is the first-hand account of Intel’s Peter Morgan, photographer of the event.

    Cal Poly Computer Science student games featured in the Intel area at GDC.

    By Peter Morgan, Intel Corporation

     

    Intel Corporation sponsored a two-day student game hackathon February 27 and 28 at Cal Poly San Luis Obispo, California. The challenge at hand was to develop a working computer game in just twenty-four hours. This game jam was hosted by the Cal Poly Game Development club (CPGD) and supervised by club advisor and professor Foaad Khosmood. Participants developed their games in JavaScript using the Intel Cross-Platform Development Kit (XDK). Professor Khosmood said, “The students spend about eighty percent of the development time in the XDK. They know to create a small portion of code and then use the XDK to test it before investing more time. The XDK allows them to create applications, testing them all the way through the development process, to know that the application will work properly across multiple device form factors once completed.”

    Brad Hill of Intel Corporation’s Developer Relations Division was the guest of honor managing the event and mentoring the students along with Professor Khosmood.

    It all started Friday night in the Advanced Technology Lab presentation room, where seventy-five male and female upper-division computer science students gathered to hear Professor Khosmood welcome the group and introduce Brad Hill of Intel Corporation. Brad then demonstrated the creation of a simple game in real time, showing how easy it is to program in JavaScript by creating an entire game live in about ten minutes and porting it to a multitude of platforms instantly using Intel’s XDK. Then he posted the finished game to a site where students could access the program. He displayed a QR code on the presentation screen, and the students scanned the code and downloaded the demo game right to their phones in real time.

    Next it was time for the thirty-three participating game jam students to pitch their game concepts to the audience in search of team members to join in their game development. Each participant gave a brief overview of their concept for a game and explained what talent and skills their team needed from interested potential members. Ideas ranged from fighting spaceships, to Pong on steroids, to a meditating person swatting away corrupting distractions. After the pitches the students began to talk to one another to find the right team to join up with, like a scene from American Idol Hollywood Week.

    By this time it was 7 PM and I’m thinking my day is drawing to a close. Not for these young game developer Padawans. It’s time for the groups to head to room 242 of the Cal Poly Computer Science Department building to carb up on pizza and prep for their twenty-four-hour quest with every caffeine drink known to mankind. The night ended as we adjourned just after nine o’clock.

    Day two started bright and early on Saturday morning. The teams were now well established, and roles and responsibilities were divided up, allowing team members to work within their individual areas of interest and expertise across coding, graphics creation, and music composition. The musician developers were easy to identify as they lugged in large music keyboards that would interface with the development laptops as the teams staked out their territories in the classroom. They coded with relentless focus for hours. I was amazed at how well they all worked together. They all seemed open to input from others and focused on the task at hand. Unlike Hollywood Week, there was no drama here.

    After a short break for lunch we enter the long haul of six hours of intense coding as the teams work to meet the deadline of completing a game by tonight. The level of collaboration remained impressive. Nonmusical students give input on what they think the menu music should be like and how it could contrast with the game background music. Never did I hear one say, “That’s my area” or “I don’t like that idea”. It is impressive how open these coders are to their teammates’ input. The spirit in the room is one of excitement, determination, and cooperation.

    For the entire afternoon the room is mostly quiet except for discussions between teammates. You can see the intense concentration on their faces as they program, test, and adjust their code. Work continues and the hours pass as the musicians craft their music, the artists create their characters and environments, and the coders make them all come to life and interact. Later in the afternoon the atmosphere begins to change and lighten up as bits of their games begin to come to life. You begin to hear an occasional sound effect reminiscent of vintage games like "Centipede" or "Asteroids", or chirping sounds and music, all of which will become elements of the games. Faces begin to light up with smiles and laughter as test runs begin and these future software professionals see the first glimpse of characters and movement coming to life in the worlds they have created.

    As I watched them work so diligently, I thought about earlier in the day when I was trying to find the room. I asked a random student on campus if she could direct me to the Computer Science Building. Her face lit up with a smile and she said, "That's my building. I’ll take you there." I asked her grade and she replied, "I’m a senior." I asked if she had done an internship, and she answered with a smile and pride written all over her face as she told me she had spent the summer at Apple and that she has a job waiting for her after graduation. Professor Khosmood told me this is the norm for Cal Poly students. He then went on to tell me the average starting salaries, which I won’t repeat here, but let me just say, “Well done, Cal Poly.” I told him not many people get to go through life knowing daily that they have so tangibly and positively influenced the lives of others. Four years in the Cal Poly Computer Science department appears to have a measurable return on investment. But there's more than money and employment at play here. The smiles on the faces of these competitors, as each element of the games they created performs as intended, show pride, accomplishment, and self-esteem.

    Suddenly the mood changes within one team as a bug is discovered or a function doesn't perform as expected. One team member verbalizes the problem they have discovered. Another member asks a clarifying question, then another suggests a possible solution. No blame. No panic. No drama. A short time later all is well and they are back on track. The entire day has been one of harmonious interaction and collaboration, a process seemingly free of egos and arrogance. Is there no limit to what we can learn from our kids?

    As we enter the final two-hour mark, the silence is occasionally interrupted by smatterings of music, laughter, and cheers as humorous alien, spaceship, and motion sound effects come to life. As the teams begin to “test”, or should I say “play”, their nearly completed games, shouting is common as they score points and high-five each other while the games literally come to life.

    As the time comes to an end there is a slight sense of urgency to finish the apps and ready them for presentation, but the teams are surprisingly calm. I am more stressed about finishing this article and its photos than the students seem to be about finishing their games. They never appear to doubt that they will finish on time and that the games will perform as they should.

    Time’s up. Everyone stops working. The intense twenty-four hours from concept to working game is done. After a dinner break it’s time for the unveiling. Each team introduces its members and presents its game. They laugh and cheer for each other’s games as we all watch them being played for the first time. I am stunned by how complete the games are. These aren’t stick figures with rudimentary animation. What was just an idea last night is now a completed, operational game. And because the young developers worked with Intel’s XDK, the apps were created, ported, and tested to work on multiple device form factors, including smartphones and tablets. The only thing left to do now is put them on an app store. And believe me, these games are good enough to sell. I may have just witnessed the creation of the next “Angry Birds”. Upon completion of the presentations, the participants vote on the “developer’s choice” award. But at this hackathon everybody is a winner. All ten apps will be featured in the Intel exhibit at the Game Developers Conference at Moscone Center in San Francisco just the following week, and as part of the event sponsorship Intel is sending one lucky member from each team to attend GDC. What a great opportunity for these young game developers to experience the world’s largest forum of game developers. Within a year they will all leave Cal Poly and America’s happiest city, San Luis Obispo, and head into the real world. Most likely many of them will end up in the very real world of Silicon Valley, and they’ll hit the valley with a running head start. Today the annual Billionaire’s Club list came out, with Snapchat's Evan Spiegel, just twenty-four years old, ranked as one of the world's youngest billionaires. As I listened to the reading of the list on my car radio, I had to wonder whether one of these future software professionals is destined for that list.

    - END -

    Peter Morgan is a marketing manager at Intel Corporation and a graduate of Cal Poly.

    Peter Morgan

    916-622-9518

    Peter@petermorgan.com

  • calpoly
  • xdk
  • gamejam
  • gdc
  • game developers conference
  • Mobile Games
  • hackathon
  • 图标图像: 

  • 游戏开发
  • 英特尔 XDK
  • JavaScript*
  • 安卓*
  • 一劳永逸编码
  • 游戏开发
  • HTML5
  • 笔记本电脑
  • 电话
  • 平板电脑
  • 桌面
  • 开发人员
  • 教授
  • 学生
  • 安卓*
  • Apple iOS*
  • Microsoft Windows* 8
  • Tizen*
  • 主题专区: 

    HTML5

    包括在 RSS 中: 

    1
  • 高级
  • 入门级
  • 中级

  • Making Your Android* Application Login Ready Part II


    Introduction

    In part I we explored adding a login screen to our restaurant application and then customizing the rest of the application based on the access level of who logs in. That way, when the manager logs in, they can do their managerial tasks like editing the menu and analyzing restaurant sales, while the customer can see their coupons and reward points. Part I can be read here:

    Making Your Android* Application Login Ready Part I

    Now in part II we will cover sending and receiving calls to and from a server to handle the user login logic. That way the user will be able to log into any tablet in the restaurant or at any other chain location. The users will be stored in a MongoDB* database on the server side, which can be accessed via RESTful endpoints using the Spring* IO library. To learn more about the server component and how to set it up, see:

    Accessing a REST Based Database Backend From an Android* App

    Adding app-to-server communication adds another layer of complexity to our application. We need to add error handling for when the tablet has no internet connection and for when the server is offline, in addition to handling HTTP errors.

    Verify Internet Connection

    Before the customer logs in and tries to connect to the server, we want to verify that the device is connected. There is no point in trying to talk to a server when the device itself isn’t even on the network. So before launching into the login screen, we check that the device has access to either Wi-Fi or a cellular connection. 

    public Boolean wifiConnected(){
        ConnectivityManager connManager = (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo mWifi = connManager.getNetworkInfo(ConnectivityManager.TYPE_WIFI);
        NetworkInfo mCellular = connManager.getNetworkInfo(ConnectivityManager.TYPE_MOBILE);
        // getNetworkInfo() can return null (e.g., no cellular radio on a Wi-Fi-only tablet)
        boolean wifiOk = mWifi != null && mWifi.isConnected() && mWifi.isAvailable();
        boolean cellOk = mCellular != null && mCellular.isConnected() && mCellular.isAvailable();
        return wifiOk || cellOk;
    }
    
    public void wifiNotConnected(){
        Intent intent = new Intent(LoginViewActivity.this, OrderViewDialogue.class);
        intent.putExtra(DIALOGUE_MESSAGE, getString(R.string.wifi_error));
        startActivity(intent);
        mUserFactory.logoutUser();
        mSignInButton.setEnabled(true);
        mRegisterButton.setEnabled(true);
        MainActivity.setCurrentUser(null);
        mSignInProgress = STATE_DEFAULT;
        LoginViewActivity.this.finish();
    }
    

    Code Example 1: Check data connection

    We do this in the onResume() method to ensure the check always runs before the login activity starts. If Wi-Fi or data is connected, we can then launch the intent specific to the access level of the user who is logged in. 
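    As a rough illustration (not part of the original sample), the connectivity check from Code Example 1 might be wired into onResume() along these lines; launchUserIntent() is a hypothetical helper standing in for whatever access-level-specific navigation the app performs.

    @Override
    protected void onResume() {
        super.onResume();
        if (wifiConnected()) {
            // Connected: continue to the screen that matches the logged-in user's access level.
            launchUserIntent();   // hypothetical helper, not from the sample code
        } else {
            // No Wi-Fi or cellular data: show the error dialogue and reset the login UI.
            wifiNotConnected();
        }
    }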

    Figure 1: Screenshot of the restaurant application’s manager portal

    Async Task

    To make the calls to the server, we don’t want to interfere with the rest of the application by using the main UI thread. Instead we use an AsyncTask to make the call asynchronously in the background. This pattern is used for the login call (HTTP GET), the register call (HTTP POST), the update call (HTTP PUT), and the delete call (HTTP DELETE).

    To demonstrate how to use an AsyncTask, the following shows how to set up the call for the user login (HTTP GET). When the user clicks login, we first retrieve the inputs and set up the AsyncTask as seen below. 

    final String email = mUser.getText().toString();
    final String password = mPassword.getText().toString();
    new AsyncTask<String, Void, String>() {
        //… AsyncTask methods (see below)
    }.execute();
    

    Code Example 2: Overview of AsyncTask for login call**

    The AsyncTask methods we need are onPreExecute, doInBackground, onPostExecute, and onCancelled. In the first method, we give the user feedback that the application is starting to log in by setting the status message and disabling the buttons to prevent subsequent login attempts. We also set up a Handler to cancel the task should the server take too long to respond; this will trigger the onCancelled method.  

    @Override
    protected void onPreExecute() {
        // set the state
        mStatus.setText(R.string.status_signing_in);
        mSignInProgress = STATE_IN_PROGRESS;
        // disable subsequent log-in attempts
        mSignInButton.setEnabled(false);
        mRegisterButton.setEnabled(false);
        // cancel the task if it takes too long
        final Handler handler = new Handler();
        final Runnable r = new Runnable() {
            public void run() {
                cancel(true);
            }
        };
        handler.postDelayed(r, 15000);
    }

    Code Example 3: AsyncTask onPreExecute() method **

    The doInBackground method is self-explanatory: this is where the method that communicates with the server is called, all in a thread separate from the main UI thread. The user is therefore free to continue exploring, and the app won’t appear to have frozen. 

    @Override
    protected String doInBackground(String... params) {
        String results = "";
        try {
            mUserFactory.loginUserRestServer(email, password);
        } catch (Exception e) {
            results = e.getMessage();
        }
        return results;
    }

    Code Example 4: AsyncTask doInBackground() method**

    Once the call to the server is complete and we get a response back, we move on to the onPostExecute method. Here we handle displaying any errors to the user or informing them that they are now logged in. Note that setting the user variables is done in the loginUserRestServer method that we called in doInBackground(); that is explained later in this article. 

    @Override
    protected void onPostExecute(String result) {
        mSignInProgress = STATE_DEFAULT;
        if ((result != null) && result.equals("")) {
            Intent intent = new Intent(LoginViewActivity.this, OrderViewDialogue.class);
            intent.putExtra(DIALOGUE_MESSAGE, String.format(getString(R.string.signed_in_as), MainActivity.getCurrentUser().firstName));
            startActivity(intent);
        } else {
            mStatus.setText(String.format(getString(R.string.status_sign_in_error), result));
            mSignInButton.setEnabled(true);
            mRegisterButton.setEnabled(true);
        }
    }

    Code Example 5: AsyncTask onPostExecute() method**

    Finally, in the onCancelled method, we inform the user that there was an error and re-enable the buttons so the user can retry. 

    @Override
    protected void onCancelled() {
        mStatus.setText("Error communicating with the server.");
        mSignInButton.setEnabled(true);
        mRegisterButton.setEnabled(true);
    }

    Code Example 6: AsyncTask onCancelled() method **

    Server Calls

    For the GET call to our Spring IO server, we search for the user’s login credentials in the database using a findByEmailAndPassword query method defined on the server side. It returns a JSON response, which is parsed into a local user variable. Our handler also notifies the navigation drawer to update and display the options specific to the user’s access level. If you examine the code below, you will see that we send the password to the server as-is; in the real world you should at the very least hash it with PBKDF2 and a salt (a sketch follows Code Example 7), use one of the various encryption libraries available online, or switch to an HTTPS-capable server. We also check for input errors here, which avoids the delay of sending bad input to the server to evaluate. 

    public void loginUserRestServer(String email, String password) throws Exception {
            if(email.length() == 0){
                throw new Exception("Please enter email.");
            }
            if(password.length()==0){
                throw new Exception("Please enter password.");
            }
    
            UserRestServer result = null;
            User user= new User();
            String url = "http://<server-ip>:8181/users/";
            RestTemplate rest = new RestTemplate();
            rest.getMessageConverters().add(new MappingJackson2HttpMessageConverter());
    
            try {
                String queryURL = url + "search/findByEmailAndPassword?name=" + email+"&password="+password;
                Users theUser = rest.getForObject(queryURL, Users.class);
                if (!(theUser.getEmbedded() == null)) {
                    result = theUser.getEmbedded().getUser().get(0);
                    user.setFirstName(result.getFirstName());
                    user.setLastName(result.getLastName());
                    user.setEmail(result.getEmail());
                    user.setAccessLevel(result.getAccessLevel());
                } else {
                    throw new Exception("No user found or password is incorrect");
                }
            }catch (Exception e) {
                if(e instanceof ResourceAccessException){
                    throw new Exception("Connection to server failed");
                }else {
                    throw new Exception(e.getMessage());
                }
            }
            MainActivity.setCurrentUser(user);
            Message input= new Message();
            mHandler.sendMessage(input);
        }

    Code Example 7: Login/GET Call to Rest Based Database Backend server **
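    As noted above, sending the raw password is acceptable only for a demo. Below is a minimal, illustrative sketch of hashing a password with PBKDF2 and a salt using the standard javax.crypto classes; the iteration count, key length, and the way the salt is generated and shared with the server are assumptions for illustration, not part of the sample application.

    import java.security.SecureRandom;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public class PasswordHasher {
        // Derives a PBKDF2 hash of the password. The same salt must be stored on the
        // server so the hash can be recomputed and compared at login time.
        public static byte[] hashPassword(char[] password, byte[] salt) throws Exception {
            PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 256); // illustrative parameters
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
            return factory.generateSecret(spec).getEncoded();
        }

        // Generates a random salt for a newly registered user.
        public static byte[] newSalt() {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            return salt;
        }
    }

    Even with client-side hashing, an HTTPS-capable server remains the better option for protecting credentials in transit.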

    When there is a new user and they need to register, the application will send a POST call to our server to add them to the database. First we check that the email is not already taken by another user and then we create a new user object to add to the server. By default, we give the user customer access; an existing manager can then change their access later if needed through the application.  

        public void registerRestServer(String first, String last, String email, String password) throws Exception{
            if(first.length() == 0){
                throw new Exception("Please enter first name.");
            }
            if(last.length()==0){
                throw new Exception("Please enter last name.");
            }
            if(email.length()==0){
                throw new Exception("Please enter email.");
            }
            if(password.length()==0){
                throw new Exception("Please enter password.");
            }
            String url = "http://<server-ip>:8181/users/";
            RestTemplate rest = new RestTemplate();
            rest.getMessageConverters().add(new MappingJackson2HttpMessageConverter());
    
            try {
                String queryURL = url + "search/findByEmail?name=" + email;
                Users theUser = rest.getForObject(queryURL, Users.class);
                if (theUser.getEmbedded() == null) {
                    UserRestServer myUser = new UserRestServer();
                    myUser.setFirstName(first);
                    myUser.setLastName(last);
                    myUser.setEmail(email);
                    myUser.setPassword(password);
                    myUser.setAccessLevel(CUSTOMER_ACCESS);
                    rest.postForObject(url,myUser,Users.class);
                } else {
                    throw new Exception("User already exists");
                }
            }catch (Exception e) {
                if(e instanceof ResourceAccessException){
                    throw new Exception("Connection to server failed");
                }else {
                    throw new Exception(e.getMessage());
                }
            }
        }

    Code Example 8: Register/POST Call to Rest Based Database Backend server**

    For a manager updating a user’s access level, the PUT call requires the href of the user on the server. As our application doesn’t store any information on users besides the current user, we must first do a GET call to the server to find the href. 

    public void updateUserAccessRestServer(String email, String accessLevel) throws Exception{
            if(email.length()==0){
                throw new Exception("Please enter email.");
            }
            if(accessLevel.length()==0){
                throw new Exception("Please enter accessLevel.");
            }
    
            String url = "http://<server-ip>:8181/users/";
            RestTemplate rest = new RestTemplate();
            rest.getMessageConverters().add(new MappingJackson2HttpMessageConverter());
    
            try {
                String queryURL = url + "search/findByEmail?name=" + email;
                Users theUser = rest.getForObject(queryURL, Users.class);
                if (!(theUser.getEmbedded() == null)) {
                    theUser.getEmbedded().getUser().get(0).setAccessLevel(accessLevel);
                    String urlStr = theUser.getEmbedded().getUser().get(0).getLinks().getSelf().getHref();
                    rest.put(new URI(urlStr),theUser.getEmbedded().getUser().get(0));
                } else {
                    throw new Exception("User doesn't exist");
                }
            }   catch (Exception e) {
                if(e instanceof ResourceAccessException){
                    throw new Exception("Connection to server failed");
                }else {
                    throw new Exception(e.getMessage());
                }
            }
        }

    Code Example 9: Update/PUT Call to Rest Based Database Backend server**

    Again, for the remove call we need the href to delete the user if we are the manager. If it is the customer removing their own account, though, the app can simply reference the current user’s data (except for the password, which is not stored). 

    public void removeUserRestServer(String email, String password, boolean manager) throws Exception{
            if(email.length()==0){
                throw new Exception("Please enter email.");
            }
            if(password.length()==0 && !manager){
                throw new Exception("Please enter password for security reasons.");
            }
    
            String url = "http://<server-ip>:8181/users/";
            RestTemplate rest = new RestTemplate();
            rest.getMessageConverters().add(new MappingJackson2HttpMessageConverter());
    
            try {
                String queryURL;
                String exception;
                String urlStr;
                if(manager) {
                    queryURL = url + "search/findByEmail?name=" + email;
                    exception= "User doesn't exist";
                }else{
                    queryURL= url + "search/findByEmailAndPassword?name=" + email+"&password="+password;
                    exception= "User doesn't exist or password is incorrect";
                }
                Users theUser = rest.getForObject(queryURL, Users.class);
                if (!(theUser.getEmbedded() == null)) {
                    if(manager) {
                        urlStr = theUser.getEmbedded().getUser().get(0).getLinks().getSelf().getHref();
                    }else{
                        urlStr = MainActivity.getCurrentUser().getHref();
                    }
                    rest.delete(new URI(urlStr));
                } else {
    
                    throw new Exception(exception);
                }
            }   catch (Exception e) {
                    if(e instanceof ResourceAccessException){
                        throw new Exception("Connection to server failed");
                    }else {
                        throw new Exception(e.getMessage());
                    }
                }
        }

    Code Example 10: Remove/DELETE Call to Rest Based Database Backend server**

    If you already have a regular HTTP server that you would like to use, below is some example code for the GET call. 

    public void loginUserHTTPServer(String email, String password) throws Exception {
        if (email.length() == 0) {
            throw new Exception("Please enter email.");
        }
        if (password.length() == 0) {
            throw new Exception("Please enter password.");
        }

        User result = new User();
        DefaultHttpClient httpClient = new DefaultHttpClient();
        // Query parameters must be appended to the request URI; setting them through
        // HttpParams/BasicHttpParams only configures the client and never sends them.
        String url = "http://10.0.2.2:8080/user"
                + "?email=" + URLEncoder.encode(email, "UTF-8")
                + "&password=" + URLEncoder.encode(password, "UTF-8"); // java.net.URLEncoder

        HttpGet httpGet = new HttpGet(url);
        try {
            HttpResponse response = httpClient.execute(httpGet);

            String responseString = EntityUtils.toString(response.getEntity());
            if (response.getStatusLine().getStatusCode() != 200) {
                String error = response.getStatusLine().toString();
                throw new Exception(error);
            }
            JSONObject json = new JSONObject(responseString);
            result.setEmail(email);
            result.setFirstName(json.getString("firstName"));
            result.setLastName(json.getString("lastName"));
            result.setAccessLevel(json.getString("accessLevel"));
        } catch (IOException e) {
            throw new Exception(e.getMessage());
        }
        MainActivity.setCurrentUser(result);
        Message input = new Message();
        mHandler.sendMessage(input);
    }

    Code Example 11: Login/GET Call to an HTTP server**

    Summary

    This series of articles has covered how to add login capabilities to our restaurant application. We added a login screen for users and some special abilities for managers to manage the users and the menu. And now, in part II, the application can talk to our server and log in seamlessly across different tablets.

     

    References

    Making Your Android* Application Login Ready Part I

    Accessing a REST Based Database Backend From an Android* App

    Building Dynamic UI for Android* Devices

    About the Author

    Whitney Foster is a software engineer at Intel in the Software Solutions Group working on scale enabling projects for Android applications.

     

    *Other names and brands may be claimed as the property of others.
    **This sample source code is released under the Intel Sample Source Code License Agreement.

     

  • Android
  • UI
  • java
  • login
  • 开发人员
  • 学生
  • Android*
  • 安卓*
  • Java*
  • 中级
  • 电话
  • 平板电脑
  • URL
  • 主题专区: 

    Android
  • 安卓*
  • Quick Installation Guide for Media SDK on Windows with Intel® INDE


    Intel® INDE provides a comprehensive toolset for developing media applications targeting both CPUs and GPUs, enriching the development experience of a game or media developer. If you are used to working with the legacy Intel® Media SDK, or if you just want to get started with those tools quickly, you can follow these steps to install only the Media SDK components of Intel® INDE.

    Go to the Intel® INDE Web page, select the edition you want to download, and click the Download link:

    At the Intel INDE downloads page select Online Installer (9 MB):

    At the screen where you select which IDE to integrate the Getting Started tools for Android* development with, click Skip IDE Integration and uncheck the Install Intel® HAXM check box:

    At the component selection screen, select only Media SDK for Windows, Media RAW Accelerator for Windows, Audio for Windows, and Media for Mobile in the Analyze/Debug category (you are welcome to select any additional components you need as well), and click Next. The installer will install all of the Media SDK components.

    Complete the installation and restart your computer. Now you are ready to start working on your game or media application with the Intel® Media SDK components!

    If you later decide that you need to install additional components of the Intel® INDE suite, rerun the installer and select the Modify option to change the installed features:

    and then you can select additional components that you need:

    Complete the installation and restart your computer. Now you are ready to start using additional components of the Intel® INDE suite!

     

  • mediasdk
  • Media for Mobile
  • Media RAW Accelerator
  • 开发人员
  • 教授
  • 学生
  • Android*
  • Apple iOS*
  • Apple OS X*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • 游戏开发
  • Windows*
  • C/C++
  • Java*
  • 高级
  • 入门级
  • 中级
  • Media SDK Windows* 版
  • 媒体客户端解决方案
  • 移动媒体
  • 英特尔® Integrated Native Developer Experience
  • Microsoft DirectX*
  • 开发工具
  • 游戏开发
  • 图形
  • Microsoft Windows* 8 Desktop
  • 平板电脑
  • 桌面
  • URL
  • 开始
  • 主题专区: 

    IDZone
  • Windows*
  • How I Fell in Love with Intel RealSense Technology


     

    Perceptual Computing


    As a big fan of virtual and augmented reality, the first time I heard the name "perceptual computing" more than a year ago, it sounded like something different, and I thought it was related to cloud computing. After attending an informational session, I realized it is a cutting-edge technology that will bring PCs to life. My impression was confirmed when I read the Mooly Eden article "Intel bets perceptual computing will save the PC (interview)".

    I still remember an entertaining video I watched about perceptual computing, one that even a person without advanced knowledge can understand. 

    * Mooly Eden led the perceptual computing activities at Intel


    Intel RealSense Technology: My First Contact

     

    Before hearing about the perceptual computing camera and SDK, I had been closely following Kinect and the Leap Motion 3D (now the Leap Motion controller). Over time, Intel Perceptual Computing has matured and is now well defined and positioned as Intel RealSense technology. It can detect everything down to the fingertips with precision, as well as capture motion, perform 3D scans, and much more.

    When I went to give a talk at one of the Unity3D* porting labs, I attended a session on Intel RealSense and fell in love immediately because it integrates with Unity3D. Isn't that fabulous?! I still remember controlling the entire solar system with my hands. The experience was immersive and realistic.


    An up-close Intel RealSense demo


    What to Do as a Developer

    If, after reading this, you are thinking about bringing your ideas to life, visit the Intel Developer Zone right away, register, and install the Intel RealSense SDK on your machine.

    You can also try RealSense with the depth camera on the Dell Venue 8 7000.

     

  • RealSense
  • Intel RealSense Technology
  • Intel RealSense
  • 图标图像: 

  • 技术文章
  • 英特尔® 实感™ SDK
  • 英特尔® 实感™ 技术
  • 感知计算
  • Unity
  • 英特尔® 实感™ 技术
  • 前置 F200 照相机
  • 笔记本电脑
  • 平板电脑
  • 开发人员
  • Microsoft Windows* 8
  • 主题专区: 

    RealSense
  • 英特尔® 实感™ 技术
  • 包括在 RSS 中: 

    1
  • 入门级
  • Check out the Parallel Universe e-publication


    The Parallel Universe is a quarterly publication devoted to exploring inroads and innovations in the field of software development, from high performance computing to threading hybrid applications.

    Issue #20 - Cover story: From Knights Corner to Knights Landing: Prepare for the Next Generation of Intel® Xeon Phi™ Technology, by James Reinders, Director of Parallel Programming Evangelism, Intel

    The Parallel Universe Archive

    Sign up for future issues

    图标图像: 

  • Data Center
  • 大型企业
  • Intel® Many Integrated Core Architecture
  • 开源
  • 优化
  • 并行计算
  • 矢量化
  • 英特尔® Composer XE
  • 英特尔® Fortran Composer XE
  • 英特尔® Parallel Composer
  • 英特尔 Inspector XE
  • 英特尔® VTune™ 放大器
  • 英特尔® Parallel Studio XE Professional Edition
  • 英特尔® Parallel Inspector
  • 具有机器学习的 Intel® Platform Modeling
  • OpenMP*
  • 网络
  • 服务器
  • Windows*
  • 笔记本电脑
  • 服务器
  • 平板电脑
  • 桌面
  • 开发人员
  • 教授
  • 学生
  • 主题专区: 

    IDZone

    包括在 RSS 中: 

    1
  • 入门级
  • 中级
  • Designing Applications for Intel® RealSense™ Technology


    By Ryan Clark, Chronosapien Interactive

     

    Download PDF [PDF 571KB]

     

     

    Introduction

    When we design for emerging media technologies (for example, gesture control), our goal as application developers is for the experience to be fun for users while also feeling intuitive and familiar and sparking their enthusiasm. The navigation design needs to be intuitive enough that when users launch an application for the first time, they immediately start exploring its features. In our most recent experiments with Intel® RealSense™ technology, we strove to create an application that users could dive right into, while weaving in enough interesting gesture features to keep them engaged. Instead of thinking of Intel RealSense technology as a replacement for standard input, we focused on the advantages of natural gestures and on the unique features offered by the Intel RealSense Software Development Kit (SDK). Our first application, Space Between, centers on hand and face tracking, while our second application incorporates more of the SDK's unique capabilities, including emotion detection and user segmentation. Along the way we learned several lessons that developers may find useful, such as designing gestures for ease of use, matching gestures to the designed game mechanics, developing interfaces that become familiar to the user, and creating menus that are easy to use and understand.

    Designing Input for Intel® RealSense™ Technology


    Figure 1: Space Between, developed to use Intel® RealSense™ technology.

    When we created our first application with Intel RealSense, we began the design process with the platform in mind. Instead of deciding how to port a style of game to gesture control, we thought about the unique interactions available through gesture control and what experiences we could build around them. Since our development started with the Intel® Perceptual Computing SDK (the predecessor of Intel RealSense technology), we focused on the two-dimensional position of the hands and their openness as the primary user interactions, and these form the basis of our game mechanics. With just these two simple interactions, we wanted to give users a wide variety of possible in-game interactions. Most of the changes in the interactions came simply from modifying the orientation of the user's hand, which gave the gestures a different feel even though the values being measured were the same.

    The main application we developed with Intel RealSense technology is Space Between, a game developed in Unity* in which the player controls different creatures to explore the depths of the ocean [Fig. 1]. It is divided into multiple mini-games (played in sequence), each centered on a different creature and input modality. Each gesture is used in a way that mimics the movement of the corresponding creature and drives the characters directly. They are generally used "one to one": the hand is oriented so that it aligns with the creature and has an immediate effect on the character's movement, which makes the controls easy to understand and learn.

    When designing these mini-games, we knew we needed to start with gesture input in mind. From there, we iterated on each one until it felt right. After using the hand tracking, face tracking, and voice recognition of the Intel Perceptual Computing SDK, we concluded that the hand-tracking module was what excited us most. In transitioning to the Intel RealSense SDK, we found that the best modules were those related to hand tracking, although the strength of the SDK lies in the number of modules it offers. The mini-games all began with hand tracking as the primary control, with head tracking used to mitigate the problems of prolonged gestures (more on this later).


    Figure 2: Wave motion in The Sunlight Zone stage.

    In our first mini-game, The Sunlight Zone, the player controls a sea turtle seen in profile. The game design started with the idea of using a gesture that mimics sticking your hand out of a car window; that is, moving the hand fluidly up and down in a wave-like motion [Fig. 2]. The turtle mimics the movement of the player's hand and gains speed with each completed wave. Originally, the only input was the position of the user's hand on the y-axis of the viewport, used as the target for the player-controlled character. After the prototyping stage, we were able to achieve a more precise gesture by using the angle of the hand. With this method, we could make the turtle react according to the angle of the user's hand, which made the interaction feel more responsive. To get the hand angle from the palm orientation provided by the hand-tracking module, we select one axis [Fig. 3].


    Figure 3: Code example of selecting an axis from the hand data.

    This was an easy gesture to teach new players, but after testing we observed that it became exhausting in less than a minute. From this physical consequence we learned about "consumed endurance" [Fig. 4], a measure of how quickly the arms tire when held up while performing gestures. The problem was that the elbow had to be raised perpendicular to the body, so it could not support the rest of the arm. This turns out to be one of the most tiring gestures.


    Figure 4: Consumed endurance formula (source: Consumed Endurance (CE) – Measuring Arm Fatigue during Mid-Air Interactions, from http://blog.jhincapie.com/projects/consumed-endurance-ce-measuring-arm-fatigue-during-mid-air-interactions/).

    We still liked the wave motion for controlling the character, but to play comfortably (and for extended periods), users had to be able to rest their elbows. We added a cruising speed to the game, in which the character does not move as fast and can be controlled solely with the angle of the hand. This lets players keep playing without feeling penalized or forced to make the wave gesture for long stretches at a time.

    Even after adding the hand angle to reduce fatigue, players still needed time to recover before the next mini-game, The Midnight Zone. The way we gave them a break was to add a mini-game that does not use hand gestures. To control the character in The Twilight Zone, the player simply leans in any of four directions, mimicking the movement of the character (a whale). In code terms, tracking these leaning movements comes from tracking the center position of the head: both its depth and its x position in the viewport.

    Designing Interfaces for Intel® RealSense™ Technology

    It did not take us long to realize that designing gesture-based applications is not a straightforward process. For our demo version of Space Between, we needed to include simple controls for selecting the mini-games. The use case for these controls was that of simple buttons: we just needed a way to select an option and accept it. Our first gesture-based interface replaced mouse control [Fig. 5]. Hand position was used to select, while the tap gesture (and later, thumbs up) was used to accept, with voice control as a fallback. While this was a quick (and temporary) solution, we found that using the hand to select from a menu this way was difficult and tedious. The cursor often shifted position while making a gesture, so buttons with large selection areas were needed. Our iteration on this was to divide the viewport into thirds and use only the position of the hand on the x-axis to select.


    Figure 5: Our initial menu for selecting mini-games in the Space Between demo.

    Our next iteration was to add a swipe gesture to the right or left [Fig. 6] to switch games by rotating a circular menu. A tap (or thumbs-up) gesture selected the active menu item. It was a visual improvement (it really encouraged user interaction) and it reduced false positives and accidental selections. We found that when designing interfaces for gesture control, it is important to emphasize responsiveness through visual and audio feedback. This helped compensate for the loss of tactile feedback.


    Figure 6: The next version of the mini-game selection menu in the Space Between demo.

    When designing intuitive interfaces, we often borrow ideas from mobile interfaces rather than those used in PC environments. Swiping and tapping are simple gestures already familiar to users, so we kept looking at ways to translate them to the gesture medium. One thing to keep in mind when using the Intel RealSense SDK is that swipes refer to something specific: the movement of each hand in an explicit direction [Fig. 7]. Wave gestures, however, have no defined direction. If you want one hand to wave in both directions, you have to track the hand position and determine its velocity. The advantage of doing this is that because the user's hand is first recognized as being in a swipe motion, the timing and speed of the swipe can be determined accurately. This makes it possible to add momentum to selections, similar to what users are accustomed to on mobile devices.


    Figure 7: From the Intel documentation: swipe gesture and wave gesture.

    Although these solutions work well for navigating menus, we realized that sometimes menus become outright unnecessary in our application. When we designed our game, we often used Journey as a reference. For those unfamiliar with it, it is a gorgeous artistic adventure game by thatgamecompany that relies on minimalism to make the game elements stand out. The start screen has a desert background and the words "Start: New Journey". Menus are used as sparingly as possible, and the controls are taught through transparent animations [Fig. 8]. When designing the start screen for Space Between, we decided to skip stage selection entirely and focus on making the user's first interaction a gameplay experience. When the user's hand is recognized, its movements begin to stir the air in front of them, forming gusts of wind. As the user plays with that simple scene, the gusts rock the boat and the game experience begins. Instead of forcing the player to select a specific stage, each of the mini-games is played one after the other.


    Figure 8: Screenshot from the game Journey showing the minimalist use of UI for instructions.

    When designing menus (or game mechanics) that require gestures, it is important to add graphical representations. It may seem obvious, but it lets the user interact quickly without having to learn every option. It is especially important when an intuitive gesture cannot always be used to select menu options. When teaching the player how to perform the gestures needed for our game mechanics, we kept the graphical representations as simple animated sprite sheets [Fig. 9]. From them, the player can determine the orientation of the hand (or head), which hand to use (or, in some cases, both), and the motion required to perform them. Since our game starts without consequences, getting the user to learn which actions the gestures drive was not a problem. We opted for an exploratory approach to the game, underscored by the growing danger of the stages. Because the player learns the gestures in the first mini-games, we use the same icons in later ones so the interactions remain familiar.


    Figure 9: Sprite-sheet instruction for performing a wave motion in Space Between.

    Because users are unfamiliar with most of the interactions, communicating feedback for every action is important. Gesture recognition is not perfect, so the user needs to know when a movement has not been recognized. In our demo version of Space Between, this feedback was obvious to the user: it was displayed at the top of the screen at all times [Fig. 10]. As hands, head, and certain gestures were recognized, the corresponding icons faded in and out. In the full version of the application, we opted for a more integrated approach. When the user stops providing input, the creatures return to a default state. For example, in the Sunlight Zone, when the user's hand is not recognized, the sea turtle they control rolls back to swim straight and changes its animation state. We designed all the characters so that, while the player is controlling them, they glow with a specific color. For games that use cursors, we made them fade out or become solid, and complemented this with audio cues when input is received or lost.


    Figure 10: Visual feedback for hand and head detection in the Space Between demo.

    When integrating complex menus, we found that it is not always necessary to use gestures as the primary control. If the application's use case allows it, falling back on mouse and keyboard for the more tedious elements (sliders and data entry) is much less frustrating for the user. Gestures do work well for buttons and for toggling states, but positional data that requires multiple axes can be difficult for the user to control. One way to remedy this is to implement an input mode that uses a single axis of motion while a grab gesture (open hand or pinch) is performed, but this does not solve the underlying problem. Although gesture technology is improving rapidly, most users have still not used it. If common input modalities cannot be used for the primary data entry, the best solution is to make the menus large. Having a common input modality as a fallback is not a bad option.

    When deciding on gestures to control menus that cannot always be displayed, selecting the gesture is extremely important. But, as mentioned earlier, many of these actions do not yet have movements or gestures associated with them in the user's knowledge base. As a case study, one of the most notable examples is a pause (or options) menu. Displaying a pause menu is important in most games, and it should be one of the quickest gestures for the user to perform and for the application to recognize. But this brings many design problems. The gestures from other familiar media (mouse-and-keyboard applications, tablets, mobile devices) have nothing in common. Keyboards use the "Escape" key, while on smartphones the tendency is to swipe in from the left edge of the screen (though even this does not always hold). The upper-left corner is generally involved in this action, yet many users associate it with the "Close" button of desktop applications and look to the upper-right corner instead. Using specific corners of the screen or swipe gestures does not work well, because of tracking loss and accidental activation, respectively. For Intel RealSense applications, Intel recommends using the "v" sign [Fig. 11] to bring up a main menu. The rationale is that it is an easy gesture to recognize and unlikely to be performed by accident. While it is neither intuitive nor familiar to users, the answer seems to be betting that the association will form over time. In addition to implementing this gesture for the pause menu, we added multiple redundant systems. If tracking is lost (the user's hands leave the camera's field of view) for a specific amount of time, a menu appears (along with familiar mouse and keyboard methods).


    Figure 11: The "v" sign from the Intel RealSense documentation, suggested for calling up menus.

    Multiple Modules with Intel® RealSense™ Technology

    When implementing several Intel RealSense SDK modules, there is more to consider than ease of use and familiarity: performance also becomes important. When working with multiple modules, it is important to pause and wait for the modules to initialize. For Space Between, we swap the active module during scene changes so the user does not notice a drop in frame rate or a loss of tracking. Before loading a scene, we check whether the required modules differ, and if they do, we run the initialization. Swapping active modules with the SDK is straightforward: initialize the new modules and call the SDK's SenseManager. In our application, we pause modules once we are done using them (e.g., facial recognition) or when the user has no control over the application (e.g., disabling face tracking while a menu is displayed).

    When working with the SDK modules, especially those that use camera streams, you have to strike a balance between frame rate and data regularity. If AcquireFrame is used to gather new data, turning off waiting for all modules and adjusting the maximum timeout helps reduce overall stutter and increase the frame rate, at the cost of losing some data if the timeout drops too low. Slow computers need more time to process frame data, while fast computers do not need as much. In Unity, this can be simplified so that faster game configurations (lower graphical complexity) result in more time being allotted to process data, and the opposite for more graphically complex configurations. One tool for this is QualitySettings, which is built into Unity [Fig. 12].


    Figure 12: Code example showing RealSense running on the Unity thread with the timeout depending on the quality settings.

    Conclusion

    Gesture technology is still very new. For that reason, designing gesture-based applications requires more iteration than usual, but using a well-designed gesture-aware application is well worth it. Always keep the user's existing knowledge in mind and borrow elements from applications and media they are familiar with. Menu use should be kept to a minimum. And above all, don't be afraid to try new things, even if you end up changing them later.

    Future Improvements and Changes for Space Between

    We have learned a lot from developing the demo and the full version of Space Between, and we will use that to keep improving the game. Although a lot of work went into making the game mechanics as intuitive and easy as possible, there are still things that can be done to improve them further. For example, the demo had visual feedback in the UI when the user's hands and head were detected. In the interest of an even more minimalist UI design, we dropped it, but we never got around to including its replacement: visual feedback integrated into the characters and the environment itself. Our idea was that instead of a fixed GUI at the top of the screen, visible at all times, parts of the characters would light up to indicate that the user now controlled them. This solves the problem of informing the user that the system has recognized their input, without cluttering the game and while keeping the environment the center of attention.

    Besides the Intel RealSense-related features, there are others that did not make it into the current version of Space Between. When we designed the full version of the game, we did a lot of research into marine life, especially in the deep sea. One thing that captivated us was the world of bioluminescence and how much ocean creatures depend on it. We really wanted to bring this into the game because we felt it was necessary to tell the story of the oceans, but also because it was just plain cool. In the current version of the game, you can see some of our attempts to integrate bioluminescence into the environment: the points you collect are loose representations of it, sea anemones release it in The Midnight Zone, and there are creatures that release it when they die in The Trenches. However, this falls short of the full bioluminescence version we had planned for the game and does not do justice to its beauty in nature.

    About the Author

    Ryan Clark is one of the founders of Chronosapien Interactive, a company based in Orlando. Chronosapien Interactive develops software focused on interactive media and specializes in emerging technologies. It is currently working on a demo for The Risen, its second application to use Intel RealSense technology. You can follow the company at chronosapien.reddit.com or contact it at theoracle@chronosapien.com.

  • Space Between
  • Chronosapien Interactive
  • Intel RealSense
  • Intel RealSense SDK
  • Gesture Recognition
  • 开发人员
  • Microsoft Windows* 8
  • 游戏开发
  • 英特尔® 实感™ 技术
  • 用户体验
  • Windows*
  • Unity
  • 中级
  • 英特尔® 实感™ SDK
  • 英特尔® 实感™ 技术
  • 游戏开发
  • 前置 F200 照相机
  • 笔记本电脑
  • 平板电脑
  • URL
  • 主题专区: 

    RealSense
  • 英特尔® 实感™ 技术
  • CGCC Healthy Kids in Motion Hackathon


    In one of our best hackathons yet, 55 students from Chandler-Gilbert Community College met on campus to create games teaching impressionable grade-school kids healthy lifestyle choices and information regarding fitness and wellness.  Those students broke into 10 teams and created the proof-of-concept demos below from scratch over the course of this fast-paced 24-hour gamejam.  Tutorials and breaks aside, these students had a mere 16 hours of development time.

    Volunteers from Intel, faculty from CGCC, hackathon veteran student mentors, and subject matter experts guided these creative students – most having little to no previous experience with JavaScript – through the self-driven learning process, with many of them getting their games playable on mobile devices!

    Huge thanks to all the volunteers who made this event a success:

    • Intel Volunteers
      • David Baker – organizer, ideation leader
      • Erica McEachern – project manager
      • Ashish Datta – lead mentor, demo leader
      • Shafiul Islam – room-specific mentor
      • Gigi Marsden – logistics (and her son as a mentor)
      • Sowmya Ravichandran – floating mentor
      • Suresh Golwalkar – floating mentor
      • Robert Alvarez – floating mentor
      • Ed Langlois – floating mentor
    • CGCC Volunteers
      • Patricia Baker – facilitator, gracious hostess
      • Cindy Barnes Pharr – facilitator, facility queen
      • Margie Gomez – fresh-perspective blogger
      • Richard Woodward-Roth – room-specific mentor
      • Mark Underwood – floating mentor
      • Colton Riffel – student mentor
    • Other Volunteers
      • Chris Moody – lead mentor, co-blogger
      • Andrew Datta – feedback wrangler, co-blogger
      • Fabian Hinojosa – photographer
      • Sunny Liu – room-specific mentor
    • Subject Matter Experts
      • Robin Sprouse – nutrition expert
      • Amy Widmeyer – nutrition expert
      • Dr. Greg Trone – fitness expert

    Some pictures from the event are posted to the Facebook page.

    The gallery below with playable versions of the demos is available as a zip file (download, unzip to a folder, run the cgccHKHgames.html file in the top directory – preferably in Google Chrome).

    CGCC Healthy Kids Hackathon

    Nov 21-22 2014

    These games were created by students from Chandler Gilbert Community College for the purpose of helping children learn healthy habits in nutrition and fitness


    "Froot"

    This app is meant to teach children the importance of good foods and portion control. It's similar to Fruit Ninja where you slash the good items and disregard the bad items.


    "Health RPG"

    An RPG centered around healthy eating and living for kids 6-11 years old.


    "RUN"

    Shopping for healthy food gives more energy


    "PAC2"

    Our idea was to have PacMan eat good foods for a buff, while junk foods result in slower movement speed


    "Unstoppable Weight Loss Tactics"

    A simple fitness tracker aimed at a young audience, "Unstoppable" features a leveling system on the fitness tracker and progress saving.


    "Food Facts"

    Drag food onto character for points and information


    "Little Chef's"

    A game about picking healthy foods for your plate, for kids 4-8 years old. The game displays healthy and unhealthy food choices, the player gets to pick, and sad or happy faces appear based on their choice.


    "World of Storecraft"

Helps youth understand the benefits and consequences of their dietary choices.


    "Gone Bananas"

Our app is designed to help children ages 6-11 have fun learning about and living a healthy lifestyle, including the right diet and physical activity.


    "Food Ninja"

    Catching healthy foods for points.


     

     

    Brad Hill

    Engineering Director of Student/Indie Hackathons

    Intel – SSG-DRD Core Client Scale Engineering

    richard.b.hill@intel.com

  • Code for Good
  • hackathon
  • healthy living
  • html5
  • javascript
  • 图标图像: 

  • 游戏开发
  • HTML5
  • JavaScript*
  • 一劳永逸编码
  • 游戏开发
  • HTML5
  • 笔记本电脑
  • 电话
  • 平板电脑
  • 桌面
  • 开发人员
  • 学生
  • 安卓*
  • Microsoft Windows* 8
  • 主题专区: 

    HTML5
  • 一劳永逸编码
  • 包括在 RSS 中: 

    1
  • 入门级
  • 中级
  • Intel(R) System Studio Developer Story : With XDB and Minnow board how to debug exception errors in the Android-Linux-Kernel.

    $
    0
    0

     

    Intel(R) System Studio Developer Story : With XDB and Minnow board, how to debug exception errors in the Android-Linux-Kernel.

  In this article, we look at how to debug and inspect exception errors in the Android Linux kernel on an Intel® x86 system with the XDB JTAG debugger, which is part of the Intel® System Studio tool suite. Along the way, we also cover what JTAG and XDB are and some background on exception handling in the Intel x86 architecture.

      1. JTAG overview

  JTAG stands for Joint Test Action Group and is pronounced "jay-tag"; the name commonly refers to IEEE Std 1149.1-1990, IEEE Standard Test Access Port and Boundary-Scan Architecture. The standard is used to debug and test SoCs (System on Chip) and microprocessor software.

  A JTAG debugging setup consists of three parts: debugger software on a host machine, a JTAG adapter, and on-chip debug (OCD) logic in the SoC.

      1.1 Debugger SW

  The debugger software receives addresses and data from the JTAG adapter and presents them to the user, and the user can likewise send addresses and data to the JTAG adapter, typically over USB. With this tool, the user can control execution and do source-level debugging by loading the symbols of the image downloaded to the target system: run, stop, step into, step over, and set breakpoints. Memory access is possible as well, so the user can easily debug the target system's software and inspect system memory and registers. XDB is the host-side debugger software in Intel System Studio.

      1.2 JTAG Adapter (Probe)

 The JTAG adapter is the hardware box that converts JTAG signals to PC connectivity signals such as USB, parallel, RS-232, or Ethernet. USB is the most popular option, and many adapters use it as the connection to the host PC. The target-side interface has many variations, although there are common standard JTAG pinouts, e.g., ARM 10-pin, ST 14-pin, OCDS 16-pin, and ARM 20-pin. The XDB and MinnowBoard MAX configuration used in this article has a 60-pin connection to the target. The ITP-XDP3 (a.k.a. the Intel Blue Box) is used as the JTAG adapter for Minnow debugging. XDB is also compatible with some other JTAG adapters, such as the Macraigor® Systems usb2Demon® and OpenOCD.

      1.3 On Chip Debug (Target SoC)

  The main components of OCD are the TAP (Test Access Port) and the TDI (Test Data In) / TDO (Test Data Out) signals. Using the TAP we can reset the device, read/write registers, or bypass it, and with TDI/TDO we can perform a boundary scan (click for more details and a picture).

    < Figure 1-1> Configuration of JTAG probe and target system - Lure is the small pin adapter for ITP-XDP3 and Minnow Board.

     

     

      2. Overview of  Exception in Intel Architecture

  An exception is a synchronous event that is generated when the processor detects one or more predefined conditions while executing an instruction. The IA-32 architecture specifies three classes of exceptions: faults, traps, and aborts. Faults and traps are normally recoverable, while an abort does not allow the program to restart. An exception is processed in the same way as an interrupt: the processor halts and saves the current context, switches to the exception handler, and returns once exception handling is done.

     < Table 2-1 > Protected-Mode Exceptions and Interrupts 

     

     3. Prepare the Minnow board and ITP-XDP3 with a host PC connection via USB

 You first need to set up the Minnow board with Android. For this, please see the article "Intel(R) System Studio Developer Story : How to configure, build and profile the Linux Kernel of Android by using the VTune" (please click the link). It introduces the Minnow board and explains how to set up, build, and download Android to it.

 Connect the Minnow board, fitted with the lure (a small PCB with the 60-pin JTAG connector), to the ITP-XDP3 JTAG probe, and connect the ITP-XDP3 to the host PC via USB. Intel System Studio should already be installed on the host so that the USB driver for the ITP-XDP3 is available; you can check in the Device Manager of your Windows host whether the XDP3 USB driver is installed correctly. Finally, run XDB.

    <Figure 3-1> Connections of Minnow target board, ITP-XDP3 JTAG probe and XDB on the host PC.

 4. Using XDB for exceptions of the Android kernel on IA (Minnow board)

  Below is the step-by-step procedure for using XDB to check and debug an exception in the kernel.

(1) Run XDB: Go to the installation directory and run the batch file (e.g., start_xdb_legacy_products.bat).

    (2) Connect to the target : Go to the XDB menu - File - Connect and select ITP-XDP3 and Z3680, Z37xx.

         

(3) Load the symbol files and set the directory of the source files. Go to the XDB menu - File - Load / Unload Symbol and set the symbol files. For source files, go to the XDB menu - Options - Source Directories and set the rule and directories. The rule maps the source path recorded in the symbol file at compile time to the current location of the source files.

    (4) Browse to the entry file which has exception handler : XDB menu - View - Source files and open the entry_64.S file.

(5) Set a breakpoint at the exception entry point: Find ENTRY(error_entry), which is the entry point for exceptions that carry an error code in the rax register. Each exception handler is defined with the zeroentry or errorentry macros, so you can set a breakpoint in error_entry or in a specific handler. In this article, we use "zeroentry invalid_op do_invalid_op" for testing.

    ENTRY(error_entry)
    	XCPT_FRAME
    	CFI_ADJUST_CFA_OFFSET 15*8
    	/* oldrax contains error code */
    	cld
    	movq_cfi rdi, RDI+8
    	movq_cfi rsi, RSI+8
    	movq_cfi rdx, RDX+8
    	movq_cfi rcx, RCX+8
    	movq_cfi rax, RAX+8
    	movq_cfi  r8,  R8+8
    	movq_cfi  r9,  R9+8
    	movq_cfi r10, R10+8
    	movq_cfi r11, R11+8
    	movq_cfi rbx, RBX+8
    	movq_cfi rbp, RBP+8
    	movq_cfi r12, R12+8
    	movq_cfi r13, R13+8
    	movq_cfi r14, R14+8
    	movq_cfi r15, R15+8
    	xorl %ebx,%ebx
    	testl $3,CS+8(%rsp)
    	je error_kernelspace
    error_swapgs:
    	SWAPGS
    error_sti:
    	TRACE_IRQS_OFF
    	ret
    
    zeroentry divide_error do_divide_error
    zeroentry overflow do_overflow
    zeroentry bounds do_bounds
    zeroentry invalid_op do_invalid_op
    zeroentry device_not_available do_device_not_available
    paranoiderrorentry double_fault do_double_fault
    zeroentry coprocessor_segment_overrun do_coprocessor_segment_overrun
    errorentry invalid_TSS do_invalid_TSS
    errorentry segment_not_present do_segment_not_present
    zeroentry spurious_interrupt_bug do_spurious_interrupt_bug
    zeroentry coprocessor_error do_coprocessor_error
    errorentry alignment_check do_alignment_check
    zeroentry simd_coprocessor_error do_simd_coprocessor_error
    

(6) Example: trigger an exception and check that the handler catches it at the breakpoint. Set a breakpoint on "zeroentry invalid_op do_invalid_op" and call BUG(), which raises the "Invalid Opcode" fault via the ud2 instruction.

    #define BUG()							\
    do {								\
    	asm volatile("ud2");					\
    	unreachable();						\
    } while (0)

    < Call the BUG() >

< Stopped at the invalid_op breakpoint >

    5. Conclusion 

 Some exceptions indicate critical errors in system hardware or software, so it is important to know which exceptions occur, and why and where they occur. With XDB you can easily check this and investigate further, because XDB provides powerful features such as easy access to the assembly and source code and inspection of the call stack and registers.

    6. References 

    Intel® 64 and IA-32 Architectures Software Developer’s Manual

JTAG 101: IEEE 1149.x and Software Debug

     

  • minnow
  • MinnowMax
  • xdb
  • JTAG
  • debug
  • exception handling
  • x86
  • Android
  • Linux
  • kernel
  • panic
  • bug fixing
  • Embedded debugger
  • embedded system debugging, joint test action group
  • embedded system debugging
  • 开发人员
  • 英特尔 AppUp® 开发人员
  • 学生
  • Android*
  • Linux*
  • Unix*
  • 安卓*
  • 物联网
  • C/C++
  • 高级
  • 中级
  • Intel® JTAG Debugger
  • 英特尔® 系统序调试程序
  • 嵌入式产品
  • 英特尔® System Studio
  • 调试
  • 开发工具
  • 英特尔® 凌动™ 处理器
  • 英特尔® 酷睿™ 处理器
  • 英特尔® 奔腾® 处理器
  • 物联网
  • 开源
  • 嵌入式
  • 平板电脑
  • URL
  • 错误检查
  • 主题专区: 

    IDZone
  • 安卓*

  • Using Intel® HAXM for Developing Android* Wear and TV Apps

    $
    0
    0

Android* has come a long way: it started with phones, then tablets, Google TV*, Android Wear*, Android TV* (which replaces Google TV), and Android Auto*. It can be challenging for developers to build and test their apps to run on all these device types. Add to this the different device form factors and display resolutions, and it can quickly become a complex app verification and testing problem. We have Intel® HAXM to the rescue.

Intel® Hardware Accelerated Execution Manager (HAXM) is a hardware-assisted virtualization engine that lets the Android emulator run with low overhead, excellent performance, and low latency. You can learn more about it here: https://software.intel.com/en-us/android/articles/intel-hardware-accelerated-execution-manager

    With Intel HAXM, developers can have multiple Android emulator instances running on their development system without having to worry too much about performance, load or latency issues. This can be very helpful in the iterative process of app development and testing, resulting in huge developer productivity.

    Non-x86 Android emulator images can have slow start-up time and sluggish UI responsiveness. Unlike some third-party Android emulators, with Intel HAXM you can use all the latest Android API versions and platforms as soon as they are released.

    For detailed instructions on using Intel HAXM please see https://software.intel.com/en-us/android/articles/speeding-up-the-android-emulator-on-intel-architecture

    In this blog post we will look at how developers can take advantage of the Intel HAXM emulator when developing a universal Android app that targets different Android platforms like Android Wear and TV, and device variations.

    Using the Universal Android Sample App

    Google recently released a sample universal app to show how developers can target multiple form factors with the same code base. Please see the following link to learn more: https://github.com/googlesamples/android-UniversalMusicPlayer

    This sample app showcases some of the best practices for targeting multiple form factors with the same code base. Follow the instructions in the above link to build the app. We will be using it to load x86 HAXM Emulator instances for TV, Wear and Phone in this article.

    The project can be directly imported into Android Studio* and developers can take advantage of the integrated emulator features. If you prefer to use other IDEs, the following can be helpful.

If you are comfortable with the command line, just invoke the Gradle build script from the sample source directory.

    gradlew assembleDebug

The APK will be available at “mobile/build/outputs/apk/mobile-debug.apk”.

    Create the AVDs for Android TV and Wear

    We need to ensure we downloaded the latest Android SDK emulator images for TV and Wear, along with the standard Android image for phone/tablet.

Open the Android SDK Manager. You can invoke it from the command line (the <Android-SDK>/tools folder should be in your path):

    > android

    Android SDK Manager

    Next, we need to create the emulator configurations (AVDs) to use the above images.

Open the Android Virtual Device (AVD) Manager. You can invoke it from the command line:

    > android avd

    Android Virtual Device (AVD) Manager

    Android Wear Emulation

    Create an Android Wear AVD configuration as shown.

    Android Wear AVD Configuration

Click “OK”, and start the Wear emulator by clicking “Start…” in the AVD Manager window. A screenshot of the Wear emulator after first boot is shown below.

         Android Wear Emulator

    Android Wear requires a companion app to be installed on your phone. This app is only available on the Google Play store, so it requires us to have a device with access to the store.

    https://play.google.com/store/apps/details?id=com.google.android.wearable.app

We need the device configured with adb debugging enabled, so that both the Wear emulator and the device show up in the adb devices list:

        Android adb devices

    Finally, we need to forward tcp ports with,

    > adb -d forward tcp:5601 tcp:5601

    The Wear emulator should now be able to connect to your device. The below screenshots show the Wear emulator in connected and disconnected states.

           Android Wear Emulator Connected and Disconnected state

    Detailed instructions for creating Wear apps can be found at: http://developer.android.com/training/wearables/apps/creating.html

Like any Android APK, we can manually install our sample app on the Wear emulator using adb:

    > adb -s emulator-5554 install -r mobile\build\outputs\apk\mobile-debug.apk

    Verify that it is in fact installed and available on the Wear emulator using,

    > adb -s emulator-5554 shell pm list packages | grep example

    The package name for the sample app, com.example.android.uamp is listed.

    We can even manually run the sample app on the Wear emulator using,

    > adb -s emulator-5554 shell monkey -p com.example.android.uamp -c android.intent.category.LAUNCHER 1

    We now have the sample app running on the Wear emulator device.

     

    Android TV Emulation

    Create an Android TV emulator configuration (AVD) as shown.

        Android TV AVD Configuration

    Click “OK”, and start the TV emulator by clicking “Start…” in the AVD Manager Window.

    We can verify if the emulator is accessible from adb using

        > adb devices

Note down the emulator id (e.g., emulator-55xx), which you can use as the target for adb commands. Install the APK using:

    > adb -s emulator-55xx install -r mobile\build\outputs\apk\mobile-debug.apk

    Finally, start the app on the Android TV emulator instance using,

    > adb -s emulator-55xx shell monkey -p com.example.android.uamp -c android.intent.category.LAUNCHER 1

    The sample app running on the Android TV emulator instance:

        

    Developers can create and start as many emulator configurations/instances as needed.

    Intel HAXM can be configured with appropriate memory size at installation time.

    The below screenshot shows the Wear, TV and phone AVD configurations.

        

    Here is the universal sample app running on all 3 (TV, Phone, and Wear) along with their CPU utilizations (notice the low CPU overhead):

        

    Developers can tweak memory allocation for further optimization. We have barely scratched the surface of emulator features in this article. Please refer to http://developer.android.com/tools/help/emulator.html for all the available config options.

    References

    *Other names and brands may be claimed as the property of others

  • #android #haxm #androidwear #androidtv
  • 图标图像: 

  • 调试
  • 开发工具
  • 游戏开发
  • 英特尔® 凌动™ 处理器
  • 移动性
  • 安卓*
  • 嵌入式
  • 电话
  • 平板电脑
  • 开发人员
  • 安卓*
  • 主题专区: 

    IDZone

    包括在 RSS 中: 

    1
  • 入门级
  • 中级
  • Advanced Computer Concepts For The (Not So) Common Chef: Terminology Pt 1

    $
    0
    0

    Before we start, I will use the next two blogs to clear up some terminology. If you are familiar with these concepts, I give you permission to jump to the next section.  I suggest any software readers still check out the other blog about threads. There is a lot of confusion, even among us software professionals.

    We will first look at what a processor, CPU, core and package are. The general media, meaning TV and the like, use these terms pretty loosely. Then we will look at threads, specifically the differences between hardware and software threads. The distinction between these different types of threads is confusing, even to the computer programmer.

    THE CORE? CPU? PACKAGE? SILICON? HUH?

    Let us look at the left hand side of Figure CPU. Back in the Pentium® days, people referred to the component of a computer that executes a program’s instructions (i.e. the brains of a computer) as either the ‘CPU’ or ‘processor’. There really was not a distinction between the two. The ‘computer chip’ was the silicon upon which an integrated circuit was etched, e.g. our CPU. The ‘package’ was the stylish plastic and metal case that wrapped and protected the silicon, and from which the multitude of pins/connections protruded.

    In today’s world, we have processors with multiple CPUs that run multiple threads each, along with multiple chips (silicon) in the same package. Terminology has been updated to reflect this modern world. Look at the right hand side of Figure CPU. What was once a CPU we now call a ‘core’. A processor can contain many cores on the same piece of silicon; a modern laptop now typically contains 2 cores in its processor; a desktop can contain 4 to 6 cores; and a server can contain upwards of 18 cores per processor. The package can now hold not just one silicon integrated circuit but several. It contains the processor silicon, of course. It might also hold flash memory, other specialized processors, and more.

    Pentium cores vs Xeon multi-core

                                           1995                                                                    2015

    Figure CPU: Processors then and now.

    Let us look at Figure SILICON. On the left is the original Pentium circa 1993. On the right is the current generation Intel® Xeon® processor E5-2600 v3 circa 2013. The Pentium processor on the left has one CPU on one silicon chip in a package. The Xeon processor on the right has 18 cores on one silicon chip, each core equivalent to one (very fast and enhanced) old style Pentium CPU. (Can you locate each of the cores?)

    Pentium processor circa 1993

    Image of Xeon E5-2600 siliconImage of Pentium die relative to Xeon E3

    Figure SILICON. Pentium vs Xeon E5-2600 v3+

My point is that in the blogs that follow, when talking about the ‘processor’, I refer to the hunk of silicon that contains all the cores and their support circuitry. By a core, I refer to a single processing unit that does computation (formerly known as a CPU), of which there can be many such units (and each of which can execute two or more threads simultaneously). And by package, I refer to the flat rectangular, metal and plastic container that can hold multiple special purpose processors, memory and other supporting circuitry, each on separate chips of silicon.
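For a concrete illustration, the minimal C# sketch below prints the number of logical processors the operating system exposes. That number counts hardware threads, not cores or packages, so a 2-core laptop with Intel® Hyper-Threading Technology enabled will typically report 4.

    // Minimal illustrative sketch: Environment.ProcessorCount reports logical processors
    // (hardware threads) visible to the operating system, not physical cores or packages.
    using System;

    class ProcessorCountDemo
    {
        static void Main()
        {
            Console.WriteLine("Logical processors (hardware threads): " + Environment.ProcessorCount);
        }
    }

Keep that distinction in mind; it is exactly the core-versus-thread difference we pick apart in the next installment.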

    Now that we have that settled, in my next installment, we look at something that confuses programmers perhaps more than it does everyone else.

    NEXT: OF COURSE, I KNOW WHAT A THREAD IS….DON’T I?

    +Just for grins, to the right of the Xeon processor in Figure SILICON, I scaled the Pentium (800nm) to show how large it would be using today’s manufacturing technology (22nm). This is a very rough representation as the size varies depending upon whether you go by # of transistors (1.4 billion / 7.5 million = x187) or feature size ((800nm)^2 / (22nm)^2 = x1322). What is shown is the more conservative x187. Yes, I know that I am not factoring in the actual die size.

     

  • Parallel Programming
  • Taylor Kidd
  • Intel Xeon Phi Coprocessor
  • MIC
  • Knights Corner
  • Knights Landing
  • manycore
  • Many Core
  • KNC
  • KNL
  • 图标图像: 

  • 学术
  • 教育
  • 英特尔® 酷睿™ 处理器
  • Intel® Many Integrated Core Architecture
  • Microsoft Windows* 8 Desktop
  • 并行计算
  • 线程
  • 企业客户端
  • 服务器
  • Windows*
  • 笔记本电脑
  • 服务器
  • 平板电脑
  • 桌面
  • 开发人员
  • 教授
  • 学生
  • Linux*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Unix*
  • 主题专区: 

    IDZone

    包括在 RSS 中: 

    1
  • 入门级
  • 中级
  • Intel® RealSense™ Code Sample: "Sketch"

    $
    0
    0

    Download the Sketch code sample

    Overview

    This code sample uses the Intel® RealSense™ SDK for Windows to create a simple virtual drawing application called Sketch. This Windows desktop application, developed in C#/WPF, demonstrates several of the hand-tracking and gesture-recognition capabilities of the Intel RealSense SDK:

    • Obtaining the x-y-z (world) coordinates of hand joints
    • Selectively capturing gesture data
    • Acquiring alert status information (i.e., hand detection, calibration, and border status)

    (Note: a front-facing 3D camera is required for the full functionality of this sample application.)

    WATCH the Sketch video overview here.

    Introduction to Sketch

    Sketch is a simple drawing application that lets the user simulate drawing on a canvas through hand gestures and movements. Figure 1 shows the Sketch user interface (developed in WPF/XAML).


    Figure 1. Sketch user interface

    Three gestures (each shown on screen with its accompanying action) are enabled for interacting with the virtual canvas:

    • Pinch ("Draw"): makes the cursor solid and draws a line on the canvas. The cursor's position on the canvas is controlled by the x and y coordinates of the tip of the user's middle finger. Line thickness is controlled by the z axis of the middle fingertip (moving it away from the camera makes the line thinner, as if easing the pressure on a pencil or brush).
    • Spread fingers ("Navigate"): deactivates the pen and turns the cursor into an empty circle. This lets the pen move to other parts of the canvas without drawing a line. It also lets the user pick colors from the palette on the right simply by hovering over them.
    • Wave ("Erase"): clears the drawing canvas, leaving it ready to draw on again.

    Details

    The Sketch application simulates drawing on a canvas when the user performs the "two_fingers_pinch_open" gesture. This gesture was chosen because it approximates the posture the hand would take if it were holding a pencil or brush. The gesture is shown in Figure 2.


    Figure 2. Draw gesture

    To determine the pen position and stroke thickness, the application tracks the tip of the user's middle finger, which might seem counterintuitive given that the draw gesture is a pinch. The middle finger is tracked to avoid possible occlusions that can occur when the thumb is pressed against the index finger; tracking it instead of the index finger or thumb gives better results.
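    To give a sense of how the fingertip data can drive the drawing, here is an illustrative C#/WPF fragment only; the OnFingertipSample callback, the element names drawCanvas, currentStroke, and cursorEllipse, and the 640x480 image size and 0.2-0.8 m depth range are assumptions for this sketch, not the sample's actual code or the SDK API. It maps the fingertip's image-space x and y to canvas coordinates and thins the stroke as z increases.

    // Illustrative C#/WPF fragment only; not code from the Sketch sample.
    // Assumes a WPF window whose XAML defines a Canvas named drawCanvas containing a
    // Polyline named currentStroke and an Ellipse named cursorEllipse. OnFingertipSample
    // is a hypothetical callback; in the real application the x/y/z values come from the
    // SDK's hand-tracking data for the middle fingertip.
    using System;
    using System.Windows;
    using System.Windows.Controls;

    public partial class SketchWindow : Window
    {
        private void OnFingertipSample(double imageX, double imageY, double depthZ, bool pinchActive)
        {
            // Map camera-image coordinates (640x480 assumed) to canvas coordinates.
            double canvasX = imageX / 640.0 * drawCanvas.ActualWidth;
            double canvasY = imageY / 480.0 * drawCanvas.ActualHeight;

            // Thinner stroke as the fingertip moves away from the camera
            // (depthZ in meters, clamped to an assumed 0.2-0.8 m working range).
            double z = Math.Max(0.2, Math.Min(0.8, depthZ));
            double thickness = 1.0 + 12.0 * (0.8 - z) / 0.6;

            if (pinchActive)
            {
                // "Draw": add a point to the current stroke at the mapped position.
                currentStroke.StrokeThickness = thickness;
                currentStroke.Points.Add(new Point(canvasX, canvasY));
            }
            else
            {
                // "Navigate": just move the hollow cursor without drawing.
                Canvas.SetLeft(cursorEllipse, canvasX);
                Canvas.SetTop(cursorEllipse, canvasY);
            }
        }
    }

    Clamping the depth keeps the stroke width stable if the hand briefly drifts outside the assumed working range.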

    The Sketch application also demonstrates how to acquire and display hand alert status information (in this case, hand detection, calibration, and border status). This kind of feedback helps users position their hands correctly in front of the camera. Although the presentation of this information is simplistic in this sample application, developers are encouraged to provide similar cues to improve the overall user experience.

    Download It

    To experiment with this application and learn more about how it works, download it here.

    About Intel® RealSense™ Technology

    To get started and learn more about the Intel RealSense SDK for Windows, go to https://software.intel.com/en-us/realsense/intel-realsense-sdk-for-windows.

    About the Author

    Bryan Brown is a software applications engineer in Intel's Developer Relations Division. His professional background spans software engineering, electronics, and systems design. His technical interests focus on natural interaction applications and brain-computer interface technologies, and he actively participates in several alpha development programs related to emerging technologies in these areas.

  • sketch
  • Hand tracking
  • Gesture Recognition
  • finger pinch
  • Intel® RealSense™ Technology
  • Intel® RealSense™
  • Intel RealSense. RealSense SDK
  • 开发人员
  • Microsoft Windows* 8
  • 英特尔® 实感™ 技术
  • Windows*
  • C#
  • 中级
  • 英特尔® 实感™ SDK
  • 英特尔® 实感™ 技术
  • 前置 F200 照相机
  • 笔记本电脑
  • 平板电脑
  • URL
  • 主题专区: 

    IDZone

    Intel® RealSense™ Code Sample: "Blockhead"

    $
    0
    0

    Download the Blockhead code sample

    Overview

    This code sample shows how the Intel® RealSense™ SDK for Windows* is used in a C#/WPF desktop application. It is a simple application, called BlockHead, that uses three interesting features of the Intel RealSense SDK:

    • Captures and displays the color stream from the RGB camera.
    • Retrieves face location and approximate head pose data.
    • Retrieves and evaluates facial expression data.

    (Note: a front-facing 3D camera is required for the full functionality of this sample application.)

    WATCH the BlockHead video overview here.

    Introduction to Blockhead

    As shown in Figure 1, the application displays the color stream in a WPF Image control and superimposes a cartoon image over the user's real face in real time.

    Superimposed cartoon image
    Figure 1. Cartoon image superimposed over the user's face

    The cartoon image is manipulated programmatically in real time based on data acquired from the SDK.

    • It resizes to match the user's face (shrinking and growing as the user moves away from or toward the camera), based on the face rectangle information.
    • It rotates left and right in response to the orientation of the user's head (roll).
    • It swaps the Image control's content based on the acquisition and scoring of expression data (see Figure 2).

    Expressions Detected in Real Time
    Figure 2. Smile, tongue-out, kiss, and mouth-open expressions detected in real time

    Details

    For this simple sample application, the graphics were created in a drawing program and saved as Portable Network Graphics (.png) files. These images could easily be replaced with artistically rendered transparencies, or even screen captures of friends, cartoon characters, and so on, for a more engaging visual effect.

    Different transforms (e.g., ScaleTransform, RotateTransform) are applied to the image object to position it in response to head-tracking inputs from the Intel RealSense SDK. These inputs include face location, pose estimation, and expression-recognition data.

    The SDK can capture around 20 different expressions, which can then be evaluated in an application. This particular application focuses on the mouth expressions: EXPRESSION_KISS, EXPRESSION_MOUTH_OPEN, EXPRESSION_SMILE, and EXPRESSION_TONGUE_OUT. However, it could easily be extended to use expression information from the eyebrows, eyes, and head.
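    The transform logic itself is ordinary WPF. The fragment below is illustrative only; the overlayImage element name, the nominal face width, the .png file names, the 0-100 smile scale, and the UpdateOverlay inputs are assumptions for this sketch, not the BlockHead sample's actual code or the SDK API. It scales the overlay to the detected face-rectangle width, rotates it by the head roll, and swaps the bitmap when a smile value crosses a threshold.

    // Illustrative C#/WPF fragment only; not code from the BlockHead sample.
    // faceWidthPixels, rollDegrees, and smileIntensity are assumed inputs that the real
    // application obtains from the SDK's face location, pose, and expression data.
    using System;
    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;

    public partial class BlockHeadWindow : Window
    {
        // Assumed XAML: an Image control named overlayImage, drawn for a nominal 200-pixel face.
        private const double NominalFaceWidth = 200.0;
        private readonly BitmapImage neutralFace = new BitmapImage(new Uri("neutral.png", UriKind.Relative));
        private readonly BitmapImage smileFace   = new BitmapImage(new Uri("smile.png", UriKind.Relative));

        private void UpdateOverlay(double faceWidthPixels, double rollDegrees, int smileIntensity)
        {
            var transforms = new TransformGroup();

            // Grow or shrink the overlay as the user moves toward or away from the camera.
            double scale = faceWidthPixels / NominalFaceWidth;
            transforms.Children.Add(new ScaleTransform(scale, scale));

            // Tilt the overlay with the user's head (roll).
            transforms.Children.Add(new RotateTransform(rollDegrees));

            overlayImage.RenderTransformOrigin = new Point(0.5, 0.5);
            overlayImage.RenderTransform = transforms;

            // Swap the bitmap when the smile expression is strong enough (0-100 scale assumed).
            overlayImage.Source = smileIntensity > 50 ? smileFace : neutralFace;
        }
    }

    Grouping the transforms keeps scaling and rotation independent, so either input can change without recomputing the other.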

    Download It

    To learn more about this application, study the code, and extend it to more interesting use cases that take advantage of the Intel RealSense SDK, download it here.

    To get started and learn more about the Intel RealSense SDK for Windows, go to https://software.intel.com/en-us/realsense/intel-realsense-sdk-for-windows.

    About the Author

    Bryan Brown is a software applications engineer in Intel's Developer Relations Division. His professional background spans software engineering, electronics, and systems design. His technical interests focus on natural interaction applications and brain-computer interface technologies, and he actively participates in several alpha development programs related to emerging technologies in these areas.

  • Blockhead
  • face location
  • face tracking
  • Facial analysis
  • Intel® RealSense™ Technology
  • Intel® RealSense™
  • Intel RealSense
  • RealSense SDK
  • 开发人员
  • Microsoft Windows* 8
  • 英特尔® 实感™ 技术
  • Windows*
  • C#
  • 中级
  • 英特尔® 实感™ SDK
  • 英特尔® 实感™ 技术
  • 前置 F200 照相机
  • 笔记本电脑
  • 平板电脑
  • URL
  • 主题专区: 

    RealSense
  • 英特尔® 实感™ 技术
  • Meshcentral - Mesh Agent v193 + Java API's and samples

    $
    0
    0

We regularly update the Mesh Agent with many new features, but this week Bryan Roe had quite an impact with a complete suite of new features that are being rolled out in Mesh Agent v193. For people using Meshcentral.com, and for many running their own server, the agent update is automatic. The new agent provides added browser compatibility, features, and security. It’s all the more impressive when you know that the agent is being released on so many platforms: Windows XP, Windows IT, Linux, OSX…  In addition, Bryan Roe also opened up a completely new avenue for developers with the new Mesh Agent API Java library. So, let’s break it all down. First, what is new in Mesh Agent v193:

    • Latest WebRTC Microstack. The latest mesh agent has a significant upgrade of its WebRTC stack. The stack had not changed in almost a year, and now, thanks to Bryan Roe’s work, it has better performance due to a larger window size, better packet-drop recovery, round-trip-time calculation, and much more. The new stack can both receive and initiate WebRTC connections and has TURN support, but the mesh agent does not use these two features yet. The new stack gives the agent much more flexibility in what we can do and support moving forward.
    • Microstack WebSocket support. The Mesh Agent’s tiny web server now has WebSocket support. In the past we used WebRTC for traffic between the browser and the mesh agent’s local web site (HTTPS port 16990). This works well, but only on WebRTC-compatible browsers. Now we have moved the local site to use WebSockets, gaining IE and Safari browser compatibility.
    • Microstack HTTP digest support. The tiny web server added HTTP digest support and we changed the local web site to use this system for authentication. This technique of authentication is a bit more secure since the browser, not the web application, gets to handle the password.
    • OpenSSL 1.0.2 branch. In this version of the agent, we switched to the latest OpenSSL branch. We are now using the latest OpenSSL 1.0.2a and will continue to follow the 1.0.2 branch moving forward. This also gives the agent DTLS 1.2 support, which is used for WebRTC. The agent’s use of the latest OpenSSL picks up the fixes for the vulnerabilities identified last week.

The new mesh agent is pretty amazing, and it is being released on many platforms all at once. That’s not all: Bryan Roe also released a new Java library for interacting with the Mesh Agent to do peer-to-peer messaging and application data storage. The new library comes with two sample applications, one with a GUI and one text-only. It’s all part of the latest Mesh Agent API package, available on info.meshcentral.com. This latest package is specifically targeted at IoT usages, where you can now have peer-to-peer discovery and messaging fully and automatically enabled. This continues the tradition of making Meshcentral an outstanding solution for embedded and IoT usages.

    Questions and feedback appreciated,
    Ylian Saint-Hilaire
    info.meshcentral.com
    Meshcentral on Twitter

    The latest Mesh Agent v193 has HTTP digest authentication and websocket support. So the local
    web site on HTTPS port 16990 is more secure and compatible with more browsers.

    The Mesh Agent v193 has many improvements over the previous versions. It’s all the more impressive
    when you know it runs on so many platforms: Windows XP, Windows IT, Linux, OSX, Android…

     

    The all new Mesh Agent API Java Library allows developers to quickly build Java applications that make
    use of the Mesh Agent’s peer-to-peer capability and application data storage system.

     

  • Mesh
  • MeshCentral
  • MeshCentral.com
  • WebRTC
  • WebSocket
  • DigestAuth
  • http
  • HTTPS
  • java
  • Mesh Agent API
  • 图标图像: 

  • 新闻
  • 开发工具
  • 物联网
  • 开源
  • 安全
  • HTML5
  • 物联网
  • Windows*
  • 嵌入式
  • 笔记本电脑
  • 电话
  • 服务器
  • 平板电脑
  • 桌面
  • 开发人员
  • 合作伙伴
  • 教授
  • 学生
  • Linux*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • 主题专区: 

    IDZone

    包括在 RSS 中: 

    1
  • 高级
  • 入门级
  • 中级