
Oculus Rift Dev Kit 2: More Simulation Promise

There’s no shortage of people eagerly following the development of the Oculus Rift, and I’m definitely one of them given I see the Oculus as central to my PhD research. In case you hadn’t heard, the second version of the Oculus Developer Kit is slated for a July 2014 release. It contains a bunch of improvements, not least of which is a much nicer looking unit.

Have a look for yourself:

The position tracking improvements are particularly exciting from a clinical simulation viewpoint, as is the improved quality of what you're seeing. Being able to lean forward, backward or sideways and have that movement fully reflected means a lot of body language cues can now be recreated virtually. Add in richer avatar body language options and you have a huge improvement to the realism of the scenario.
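To make that concrete, here's a minimal sketch (my own illustration, not Oculus SDK code) of how a tracked head position and orientation combine into the view used for rendering. Once position is tracked, a lean is simply a different input to the same calculation.

```python
# Minimal sketch (not Oculus SDK code): combining a tracked head orientation and
# position into a single view matrix, so a lean shows up in what you see.
import numpy as np

def view_matrix(head_position, head_rotation):
    """head_position: (3,) metres in tracker space; head_rotation: (3, 3) rotation matrix."""
    view = np.eye(4)
    view[:3, :3] = head_rotation.T                    # inverse of a rotation is its transpose
    view[:3, 3] = -head_rotation.T @ head_position    # move the world opposite to the head
    return view

# Leaning 20 cm forward and 10 cm to the left is just a different tracked position;
# the same code path handles it, which is what DK2's positional tracking adds.
print(view_matrix(np.array([-0.10, 0.0, -0.20]), np.eye(3)))
```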

Now that the Oculus crew are owned by Facebook, they're thinking pretty big, claiming they hope to create a billion-user VR MMO. There have been some big claims in this space before, but this one is right up there.

Diving In

It's been a positive 12 months in regard to my research, although none of the actual data collection has started and it's probably a while off yet before it does. That said, I've made some serious inroads into understanding the body of literature surrounding virtual environments and clinical simulation.

I’ve also submitted my first journal article on the topic – it’s a ‘state of play’ article with a focus on what needs to occur to convince educators that virtual environments can be a key component in their simulation arsenal. The article reviewers have given some great feedback and the final version should be submitted in the next week. Then it’s on to getting the first draft of my lit review structured and written.

It’s been a rewarding year on two other fronts. First, I have three great supervisors who are very committed to the research area and have been a huge help so far. Second, I’ve had some great chats with the developer who’ll be helping me build the simulation. We’re actually setting up a site to record our journey, which I’ll obviously flag here when it’s ready to go.

For those of you still beavering away in the field – I hope it’s a rewarding year for you!

[Image via smh.com.au]

 

Space Glasses: Merging Physical and Virtual

Sorry for the lack of updates, but my studies have been moving along fairly well and I'm getting closer to building the simulation I've been thinking about. One aspect of it may involve technology like SpaceGlasses. Have a look at the video and you'll see pretty immediately what its application may be to a virtual world-based simulation:

I have no doubt the examples shown there are buffed up for the video, but I’m hopeful the SpaceGlasses will deliver a further tool in the consumer-priced virtual reality arsenal.

Release date is scheduled for January 2014 so we’ll soon see.

The Omni: Another Piece In The Consumer VR Puzzle

I have to say, it doesn't get much more exciting in this field than what's occurring from an equipment viewpoint over the next year. I've talked about the touchless Leap Motion interface already (it's due to be released pretty soon) and then there's the Oculus Rift, arguably the first high-quality consumer VR headset. Both of these are likely to form part of my PhD research, but there's a third piece of the puzzle that, while unlikely to be used in my research, will fulfil a long-term desire in regard to gaming.

It's called the Omni and it looks like it's going to bring an affordable human movement option to consumer VR. I've always loved the idea of getting fit from gaming, and the Omni may just pull that off if it delivers what it's promising.

 

Have a look for yourself and note that the headset is the Oculus Rift, not a component of the Omni:

The team is still raising funds: they were seeking 150K and close to 900K has already been pledged, so there's no shortage of interest. Check out the full details, including some further videos, here.

Latency and Virtual Reality – The Deeper Stuff

John Carmack created this, and set me on the road to gaming obsession

John Carmack is a bit of an icon in gaming circles, and he's also one of the people supporting the Oculus VR consumer headset that's on the near horizon. I'd very stupidly assumed (having not read any biographical details on him until today) that he wasn't that deep into the coding and science behind things like this.

He's just posted a nice piece of work on the challenges of latency in virtual reality. If you're from a computer science background you'll get a lot more out of it than I did, but even I could appreciate just how critical latency is in this sphere.

Latency is of course an important consideration anywhere, but Carmack shows just how far we probably have to go before VR headsets can give an accurate perception of real-time movement in physical space. It'll happen of course, and I still want an Oculus now.
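One of the ideas Carmack works through is sensor prediction: rendering from where the head will be when the frame actually reaches the display, rather than where it was when the pose was sampled. Here's a deliberately simplified, single-axis sketch of that idea; the numbers and names are my own illustration, not taken from his article.

```python
# Rough single-axis sketch of latency compensation by pose prediction.
# All names and figures are illustrative, not from Carmack's article.

def predict_yaw(yaw_deg, yaw_rate_deg_s, motion_to_photon_s):
    """Extrapolate a yaw angle forward by the expected motion-to-photon latency."""
    return yaw_deg + yaw_rate_deg_s * motion_to_photon_s

# A 300 deg/s head turn with 50 ms of end-to-end latency leaves the image
# 15 degrees behind where your head actually is, easily enough to notice.
print(300 * 0.050)                        # 15.0 degrees of error with no prediction
print(predict_yaw(10.0, 300.0, 0.050))    # render from 25.0 degrees instead
```

Simple extrapolation like this works reasonably for short latencies and smooth movement, and breaks down as either grows, which is part of why the raw motion-to-photon number matters so much.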

Papertab: Washable Computing?

An amazing piece of development by a public/private consortium. It's an obvious evolution from tablet computing and, although it clearly has a long way to go, the implications for virtual worlds-based education are obvious, particularly in the health field where hardy devices are needed that can be cleaned easily.

In 5 to 10 years I can see options like this achieving exactly that. If the price point ends up being reasonable, it's also a great option at the pre-registration training level. Taking a purely virtual worlds perspective: if I can load up a good immersive simulation on something like the Papertab, hand it out to participants at the beginning of a session, walk them through the simulation and let them walk out with it, I have to be making some inroads. That's before you count the collaboration and interaction options already being shown between Papertabs. Of course, it'll depend on how successfully the developers can move beyond the gray-scale version on offer now, but I'd imagine that's a given over a period of years.

Have a look for yourself:

The full press release:

Cambridge, UK and Kingston, Canada – January 7, 2013 — Watch out tablet lovers — A flexible paper computer developed at Queen’s University in collaboration with Plastic Logic and Intel Labs will revolutionize the way people work with tablets and computers. The PaperTab tablet looks and feels just like a sheet of paper. However, it is fully interactive with a flexible, high-resolution 10.7″ plastic display developed by Plastic Logic, a flexible touchscreen, and powered by the second generation Intel® Core i5 processor. Instead of using several apps or windows on a single display, users have ten or more interactive displays or “papertabs”: one per app in use.

Ryan Brotman, research scientist at Intel, elaborates: "We are actively exploring disruptive user experiences. The 'PaperTab' project, developed by the Human Media Lab at Queen's University and Plastic Logic, demonstrates innovative interactions powered by Intel Core processors that could potentially delight tablet users in the future."

“Using several PaperTabs makes it much easier to work with multiple documents,” says Roel Vertegaal, director of Queen’s University’s Human Media Lab. “Within five to ten years, most computers, from ultra-notebooks to tablets, will look and feel just like these sheets of printed color paper.”

"Plastic Logic's flexible plastic displays are completely transformational in terms of product interaction. They allow a natural human interaction with electronic paper, being lighter, thinner and more robust compared with today's standard glass-based displays. This is just one example of the innovative revolutionary design approaches enabled by flexible displays," explains Indro Mukerjee, CEO of Plastic Logic.

What’s your take on this?

[via Stuff.tv]

2012 – Slow Progress

Just a quick note to say thank you for your ongoing support of this site. Updates have been much less frequent, but for all the right reasons: my PhD studies in the area are progressing, albeit slowly.

Most of 2012 has been spent getting a grasp of the literature around virtual worlds and clinical simulation. 2013 should see the full study design fleshed out and hopefully acceptance of my first journal paper.

Hopefully I’ll be updating this blog a little more often as I devote even more time to the area in the coming year.

Virtual Patient for Psychological Interventions

With thanks to Evelyn McElhinney, a fascinating piece on the use of virtual patients to mimic the symptoms of psychological disorders. To me this touches on one of the most critical aspects of virtual patients: the ability to convey nuanced psychological presence. It's not only important for the mental health professions, it's also critical for truly immersive and accurate clinical simulations in a broader context. I'll be watching this one closely.

The full press release on Dr Rizzo’s work:

New technology has led to the creation of virtual humans who can interact with therapists via a computer screen and realistically mimic the symptoms of a patient with clinical psychological disorders, according to new research presented at the American Psychological Association’s 120th Annual Convention.

“As this technology continues to improve, it will have a significant impact on how clinical training is conducted in psychology and medicine,” said psychologist and virtual reality technology expert Albert “Skip” Rizzo, PhD, who demonstrated recent advancements in virtual reality for use in psychology.

Virtual humans can now be highly interactive, artificially intelligent and capable of carrying on a conversation with real humans, according to Rizzo, a research scientist at the University of Southern California Institute for Creative Technologies. “This has set the stage for the ‘birth’ of intelligent virtual humans to be used in clinical training settings,” he said.

Rizzo showed videos of clinical psychiatry trainees engaging with virtual patients called “Justin” and “Justina.” Justin is a 16-year-old with a conduct disorder who is being forced by his family to participate in therapy. Justina, the second and more advanced iteration of this technology, is a sexual assault victim who was designed to have symptoms of post-traumatic stress disorder.

In an initial test, 15 psychiatry residents, of whom six were women, were asked to perform a 15-minute interaction with Justina. Video of one such interaction shows a resident taking an initial history by asking a variety of questions. Programmed with speech recognition software, Justina responds to the questions and the resident is able to make a preliminary diagnosis.

Rizzo’s virtual reality laboratory is working on the next generation of virtual patients using information from this and related user tests, and will further modify the characters for military clinical training, which the U.S. Department of Defense is funding, he said. Some future patients that are in development are virtual veterans with depression and suicidal thoughts, for use in training clinicians and other military personnel how to recognize the risk for suicide or violence.

In the long term, Rizzo said he hopes to create a comprehensive computer training module that has a diverse library of virtual patients with numerous “diagnoses” for use by psychiatric and psychology educators and trainees. Currently, psychology and psychiatry students are trained by role-playing with other students or their supervisors to gain experience to treat patients. They then engage in supervised on-the-job training with real patients to complete their degrees. “Unfortunately, we don’t have the luxury of live standardized ‘actor’ patients who are commonly used in medical programs, so we see this technology as offering a credible option for clinical psychology training,” he said. “What’s so useful about this technology is novice clinicians can gain exposure to the presentation of a variety of clinical conditions in a safe and effective environment before interacting with actual patients. In addition, virtual patients are more versatile and can be available anytime, anywhere. All you need is a computer.”

The press release also linked to some demonstration videos, so here they are for you:



Leap Motion: next generation motion sensing

I’ve watched the video below twice and will probably watch it a bunch more times. Leap Motion is described over at The Verge as:

The Leap uses a number of camera sensors to map out a workspace of sorts — it’s a 3D space in which you operate as you normally would, with almost none of the Kinect’s angle and distance restrictions. Currently the Leap uses VGA camera sensors, and the workspace is about three cubic feet; Holz told us that bigger, better sensors are the only thing required to make that number more like thirty feet, or three hundred. Leap’s device tracks all movement inside its force field, and is remarkably accurate, down to 0.01mm. It tracks your fingers individually, and knows the difference between your fingers and the pencil you’re holding between two of them.

Have a look for yourself and then consider answering a very obvious question below.

The obvious question with an even more obvious answer: how good would this technology be in a 3D simulation requiring demonstration of fine motor skills?
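My answer, obviously, is "very". As one illustration of the kind of thing sub-millimetre finger tracking could enable, here's a small sketch of a fine motor metric: how far a trainee's fingertip strays from an ideal straight path, such as a simulated needle insertion line. This is my own sketch, not Leap Motion API code.

```python
# Hypothetical fine motor metric fed by a finger tracker: maximum deviation of a
# fingertip trajectory from an ideal straight path (all names are illustrative).
import numpy as np

def max_deviation_mm(samples, start, end):
    """samples: (N, 3) fingertip positions in mm; start/end: ideal path endpoints."""
    direction = (end - start) / np.linalg.norm(end - start)
    offsets = samples - start
    along = offsets @ direction                      # progress along the ideal path
    nearest = start + np.outer(along, direction)     # closest point on that path
    return float(np.max(np.linalg.norm(samples - nearest, axis=1)))

# Example: a slightly wobbly 40 mm "insertion", scored against a straight line.
path = np.array([[0, 0, 0], [10, 0.3, 0], [20, 0.8, 0], [30, 0.4, 0], [40, 0, 0]], float)
print(max_deviation_mm(path, np.array([0., 0., 0.]), np.array([40., 0., 0.])))  # 0.8 mm
```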