Category Archives: Clinical Simulation

Oculus Rift Dev Kit 2: More Simulation Promise

There’s no shortage of people eagerly following the development of the Oculus Rift, and I’m definitely one of them given that I see the Oculus as central to my PhD research. In case you hadn’t heard, the second version of the Oculus Developer Kit is slated for a July 2014 release. It contains a bunch of improvements, not least of which is a much nicer-looking unit.

Have a look for yourself:

The position tracking improvements are particularly exciting from a clinical simulation viewpoint, as is the improved quality of what you’re seeing. Being able to lean forward, backward or sideways and have it fully reflected means that a lot of body language cues can now be recreated virtually. Add in richer avatar body language options and you have a huge improvement to the realism of the scenario.
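Purely as a hypothetical sketch (this is not how the Oculus SDK exposes tracking data, and the thresholds are invented for illustration), mapping tracked head-position offsets onto simple avatar body-language parameters might look something like this:

```python
# Hypothetical sketch: turn tracked head-position offsets (metres from a
# calibrated neutral pose) into normalised avatar body-language parameters.

def lean_from_head_offset(dx, dy, dz, max_offset=0.25):
    """Map head offsets to lean values in [-1, 1].

    dx: sideways offset, dy: vertical offset, dz: forward/backward offset.
    max_offset is the head movement (in metres) treated as a 'full' lean;
    the 0.25 m default is an assumption for illustration only.
    """
    clamp = lambda v: max(-1.0, min(1.0, v / max_offset))
    return {
        "lean_side": clamp(dx),     # leaning left/right
        "lean_forward": clamp(dz),  # leaning in (engagement) or back (withdrawal)
        "crouch": clamp(-dy),       # ducking down or rising up
    }

# e.g. a 10 cm lean towards a virtual patient gives lean_forward = 0.4
print(lean_from_head_offset(0.0, 0.0, 0.10))
```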

Now that the Oculus crew are owned by Facebook, they’re thinking pretty big, claiming they hope to create a billion-user VR MMO. There have been some big claims in this space before, but this one’s up there.

Space Glasses: Merging Physical and Virtual

Sorry for the lack of updates, but my studies have been moving along fairly well and I’m getting closer to building the simulation I’ve been thinking about. One aspect of it may involve technology like SpaceGlasses. Have a look at the video and you’ll see pretty immediately what its application may be to a virtual world-based simulation:

I have no doubt the examples shown there are buffed up for the video, but I’m hopeful the SpaceGlasses will deliver a further tool in the consumer-priced virtual reality arsenal.

The release date is scheduled for January 2014, so we’ll soon see.

Papertab: Washable Computing?

An amazing piece of development by a public/private consortium. It’s a natural evolution from tablet computing and, although it clearly has a long way to go, the implications for virtual worlds-based education are obvious, particularly in the health field where hardy devices that can easily be cleaned are needed.

In 5 to 10 years I can see options like this achieving exactly that. If the price point ends up being reasonable, it’s also a great option at the pre-registration training level. Taking a purely virtual worlds perspective, if I can load up a good immersive simulation on something like the Papertab, hand it out to participants at the beginning of a session, walk them through the simulation and allow them to walk out with it, I have to be making some inroads. That’s without the collaboration and interaction options already being shown between each Papertab. Of course, it’ll depend on how successfully the developers can move on from the gray-scale version on offer now, but I’d imagine that’s a given over a period of years.

Have a look for yourself:

The full press release:

Cambridge, UK and Kingston, Canada – January 7, 2013 — Watch out tablet lovers — A flexible paper computer developed at Queen’s University in collaboration with Plastic Logic and Intel Labs will revolutionize the way people work with tablets and computers. The PaperTab tablet looks and feels just like a sheet of paper. However, it is fully interactive with a flexible, high-resolution 10.7″ plastic display developed by Plastic Logic, a flexible touchscreen, and powered by the second generation Intel® Core i5 processor. Instead of using several apps or windows on a single display, users have ten or more interactive displays or “papertabs”: one per app in use.

Ryan Brotman, research scientist at Intel, elaborates: “We are actively exploring disruptive user experiences. The ‘PaperTab’ project, developed by the Human Media Lab at Queen’s University and Plastic Logic, demonstrates innovative interactions powered by Intel Core processors that could potentially delight tablet users in the future.”

“Using several PaperTabs makes it much easier to work with multiple documents,” says Roel Vertegaal, director of Queen’s University’s Human Media Lab. “Within five to ten years, most computers, from ultra-notebooks to tablets, will look and feel just like these sheets of printed color paper.”

“Plastic Logic’s flexible plastic displays are completely transformational in terms of product interaction. They allow a natural human interaction with electronic paper, being lighter, thinner and more robust compared with today’s standard glass-based displays. This is just one example of the innovative, revolutionary design approaches enabled by flexible displays,” explains Indro Mukerjee, CEO of Plastic Logic.

What’s your take on this?

[via Stuff.tv]

Virtual Patient for Psychological Interventions

With thanks to Evelyn McElhinney, a fascinating piece on the use of virtual patients to mimic psychological disorder symptoms. This to me is one of the most critical aspects of virtual patients: the capacity for nuanced psychological presence. It’s not only important for the mental health professions; it’s also critical for truly immersive and accurate clinical simulations in a broader context. I’ll be watching this one closely.

The full press release on Dr Rizzo’s work:

New technology has led to the creation of virtual humans who can interact with therapists via a computer screen and realistically mimic the symptoms of a patient with clinical psychological disorders, according to new research presented at the American Psychological Association’s 120th Annual Convention.

“As this technology continues to improve, it will have a significant impact on how clinical training is conducted in psychology and medicine,” said psychologist and virtual reality technology expert Albert “Skip” Rizzo, PhD, who demonstrated recent advancements in virtual reality for use in psychology.

Virtual humans can now be highly interactive, artificially intelligent and capable of carrying on a conversation with real humans, according to Rizzo, a research scientist at the University of Southern California Institute for Creative Technologies. “This has set the stage for the ‘birth’ of intelligent virtual humans to be used in clinical training settings,” he said.

Rizzo showed videos of clinical psychiatry trainees engaging with virtual patients called “Justin” and “Justina.” Justin is a 16-year-old with a conduct disorder who is being forced by his family to participate in therapy. Justina, the second and more advanced iteration of this technology, is a sexual assault victim who was designed to have symptoms of post-traumatic stress disorder.

In an initial test, 15 psychiatry residents, of whom six were women, were asked to perform a 15-minute interaction with Justina. Video of one such interaction shows a resident taking an initial history by asking a variety of questions. Programmed with speech recognition software, Justina responds to the questions and the resident is able to make a preliminary diagnosis.

Rizzo’s virtual reality laboratory is working on the next generation of virtual patients using information from this and related user tests, and will further modify the characters for military clinical training, which the U.S. Department of Defense is funding, he said. Some future patients that are in development are virtual veterans with depression and suicidal thoughts, for use in training clinicians and other military personnel how to recognize the risk for suicide or violence.

In the long term, Rizzo said he hopes to create a comprehensive computer training module that has a diverse library of virtual patients with numerous “diagnoses” for use by psychiatric and psychology educators and trainees. Currently, psychology and psychiatry students are trained by role-playing with other students or their supervisors to gain experience to treat patients. They then engage in supervised on-the-job training with real patients to complete their degrees. “Unfortunately, we don’t have the luxury of live standardized ‘actor’ patients who are commonly used in medical programs, so we see this technology as offering a credible option for clinical psychology training,” he said. “What’s so useful about this technology is novice clinicians can gain exposure to the presentation of a variety of clinical conditions in a safe and effective environment before interacting with actual patients. In addition, virtual patients are more versatile and can be available anytime, anywhere. All you need is a computer.”
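The interaction loop described above is obviously far more sophisticated than anything I could reproduce here, but as a toy sketch only (the real Justin/Justina systems use speech recognition and much richer AI), the core idea of a scripted virtual patient is mapping interview questions onto symptom-consistent replies:

```python
# Toy illustration only -- not the USC/ICT system. A scripted virtual patient
# maps interview questions onto symptom-consistent replies; keyword matching
# stands in here for the speech recognition and AI described in the article.

PTSD_SCRIPT = {
    "sleep": "I don't sleep much. When I do, the nightmares wake me up.",
    "nightmare": "They're always about that night. I can't make them stop.",
    "crowd": "I stopped going out. Crowds make me panic.",
    "flashback": "Sometimes it's like I'm right back there again.",
}

DEFAULT_REPLY = "I'd rather not talk about that."

def virtual_patient_reply(question: str) -> str:
    """Return a scripted, symptom-consistent reply via simple keyword matching."""
    q = question.lower()
    for keyword, reply in PTSD_SCRIPT.items():
        if keyword in q:
            return reply
    return DEFAULT_REPLY

# One turn of a trainee interview
print(virtual_patient_reply("How have you been sleeping lately?"))
```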

The press release also linked to some demonstration videos, so here they are for you:


Leap Motion: next generation motion sensing

I’ve watched the video below twice and will probably watch it a bunch more times. Leap Motion is described over at The Verge as:

The Leap uses a number of camera sensors to map out a workspace of sorts — it’s a 3D space in which you operate as you normally would, with almost none of the Kinect’s angle and distance restrictions. Currently the Leap uses VGA camera sensors, and the workspace is about three cubic feet; Holz told us that bigger, better sensors are the only thing required to make that number more like thirty feet, or three hundred. Leap’s device tracks all movement inside its force field, and is remarkably accurate, down to 0.01mm. It tracks your fingers individually, and knows the difference between your fingers and the pencil you’re holding between two of them.

Have a look for yourself and then consider answering a very obvious question below.

The obvious question with an even more obvious answer: how good would this technology be in a 3D simulation requiring demonstration of fine motor skills?
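To make that concrete (this is a hypothetical sketch, not the Leap Motion SDK, and the figures are invented for illustration), one simple way to assess fine motor skill from tracked fingertip positions is to score how far a trainee’s path deviates from a target path:

```python
# Hypothetical sketch (not the Leap Motion SDK): given a stream of tracked
# fingertip positions in millimetres, score how closely a trainee followed a
# target path, e.g. a cannulation gesture in a 3D simulation.

import math

def path_error(tracked, target):
    """Mean distance (mm) between corresponding tracked and target points."""
    assert len(tracked) == len(target), "paths must be sampled at the same rate"
    total = 0.0
    for p, q in zip(tracked, target):
        total += math.dist(p, q)
    return total / len(tracked)

# Example: a fingertip drifting 1-2 mm off an ideal 20 mm straight-line path
target  = [(0, 0, 0), (10, 0, 0), (20, 0, 0)]
tracked = [(0, 1, 0), (10, 1, 0), (20, 2, 0)]
print(f"mean deviation: {path_error(tracked, target):.2f} mm")
```

With tracking claimed to be accurate to 0.01mm, a metric like this becomes meaningful at the scale of real clinical hand movements.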

Human skin rendering at its best

As you probably know, this blog is very much focused on the use of virtual worlds in health. More specifically, because of my studies I’m professionally obsessed with their use as simulation environments. Hence my excitement when I saw this rather stunning video from Jorge Jimenez.

PhD student Jorge and his supervisor Diego Gutierrez from the University of Zaragoza have completed some remarkable work on rendering human skin. I’ll leave it there so you can have a look, but I want to ask one question: can you imagine what you could do with that level of detail in clinical simulations?

Thanks to NWN for the heads-up.

Nice coverage of virtual worlds and nursing simulation

For those less familiar with what’s going on in the virtual worlds and clinical simulation field, here’s a nice overview from an Australian viewpoint, published in the Sydney Morning Herald today:

http://www.smh.com.au/it-pro/government-it/health-in-the-third-dimension-20120516-1ypwf.html

It’s great to see VastPark involved in the area, providing another option in a still fairly sparse marketplace. I’m seriously looking at Unity3D / Jibe as the basis for my research, but I’ll also be taking a damn close look at Nursim.

As always, if you’re working in this area please let me know as I’d love to profile you.

SimHealth 2012

Once again, apologies for the infrequent updates to this site – my PhD studies are certainly dominating my time where virtual worlds are concerned, which is as it should be.

I just thought I’d give a quick heads-up that I’ll be presenting a poster on the opportunities for inter-professional clinical simulation within virtual world environments at SimHealth 2012 in Sydney in September.

If you’re going to be there, I’d love to catch up for a chat.

Jibe as simulation platform: initial thoughts

I spent some time in recent weeks with John “Pathfinder” Lester, formerly of Linden Lab and now Director of Community Development at ReactionGrid. The purpose was a walkthrough of the Jibe platform, which merges Unity3D with other technologies to create a more comprehensive learning experience. I hooked up with John for a tour as I’m actively looking for a platform on which to base my upcoming research.

To say the walkthrough was a revelation would be a bit of an understatement. I’d decided more than a year ago that I wouldn’t be using Second Life as the platform for my proposed simulation environment. My reasons for that are numerous, but they basically came down to fine detail – I’ve seen a few demos of the Unity3D engine in action and, for what I’m looking for, it’s a markedly superior option, even factoring in Second Life’s potential development path over the coming 2-3 years. So I was working under the assumption of a Unity3D-only option – until I checked out Jibe.

So what is it? I’ll quote the official blurb as it summarises it pretty well:

The Jibe platform is an extensible architecture that uses a middleware abstraction layer to communicate with multiple backend systems (currently SmartFox & Photon) and frontends (currently Unity3D)

For the layperson, it means that aside from the 3D environment you’re interacting with, Jibe can bring in data from other systems. Whether it’s web content, 3D models or other graphical material, it can be integrated into the viewing experience. Here’s a graphic demonstrating it:


(Click on the picture for full size)
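If you want a feel for what a middleware abstraction layer means in practice, here’s a rough sketch – this isn’t Jibe’s actual code, just an illustration of the pattern: the frontend talks to one interface, and which backend sits behind it is a deployment detail.

```python
# Rough illustration of a middleware abstraction layer -- not Jibe's actual
# code. The 3D frontend only sees an abstract interface; SmartFox, Photon or
# any future backend can be swapped in behind it.

from abc import ABC, abstractmethod

class NetworkBackend(ABC):
    """What the frontend needs from any backend, regardless of vendor."""

    @abstractmethod
    def connect(self, world_url: str) -> None: ...

    @abstractmethod
    def send_avatar_state(self, avatar_id: str, position: tuple) -> None: ...

class SmartFoxBackend(NetworkBackend):
    def connect(self, world_url):
        print(f"[SmartFox] connecting to {world_url}")

    def send_avatar_state(self, avatar_id, position):
        print(f"[SmartFox] {avatar_id} -> {position}")

class PhotonBackend(NetworkBackend):
    def connect(self, world_url):
        print(f"[Photon] connecting to {world_url}")

    def send_avatar_state(self, avatar_id, position):
        print(f"[Photon] {avatar_id} -> {position}")

def run_frontend(backend: NetworkBackend):
    """A Unity-style frontend written against the abstract interface only."""
    backend.connect("wss://example-world")
    backend.send_avatar_state("student_01", (12.0, 0.0, 4.5))

run_frontend(SmartFoxBackend())  # swap in PhotonBackend() with no frontend changes
```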

For the educator or clinician wanting to create an immersive and realistic simulation, all of this is essentially technical information that doesn’t need to be known. Which leads me to what impressed me most about Jibe: the interface itself. Although I’d argue the overall browsing interface could do with some input from a designer (read: it’s not pretty), it intrinsically works and removes a lot of the downsides that research to date has identified in using Second Life or OpenSim on their own. Let’s use an example:

(Click here for the full size version)

What you’re seeing here is my avatar standing in front of a model of a virus. In this case it’s the Human Immunodeficiency Virus (HIV). I can rotate the virus and check out all its aspects, and when I click on it, related information (in this case a simple Wikipedia entry on HIV) opens in the same browser window I’m using. Obviously it doesn’t have to be Wikipedia – the sky’s the limit. Above the Unity window are a bunch of social media buttons allowing you to share the information you’re interacting with.

If you refer to the Jibe graphic further above, remember that essentially anything can be bolted onto the Jibe platform. If I were to use it, I’d be looking at some sort of database connection that allowed common clinical pathways to be viewed step by step as an avatar completes each task. You can also choose to have key content networked or local, meaning you could ‘phase’ content for people progressing through the simulation at different paces (a rough sketch of what I mean follows below). It’s all doable, it just takes time to set up. As far as creating content goes, any pre-fab stuff from the Unity3D store can be pulled into Jibe.
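To make the pathway and phasing idea concrete, here’s a minimal sketch. It’s not an existing Jibe feature and the sepsis steps are simplified for illustration; the point is just that each participant only sees the content for the step they’ve reached.

```python
# Minimal sketch, not an existing Jibe feature. A clinical pathway is an
# ordered list of steps; each participant only ever sees ('phases in') the
# content for the step they have reached, so groups can move at their own pace.

SEPSIS_PATHWAY = [
    "Take vital signs and calculate an early warning score",
    "Take blood cultures before antibiotics",
    "Administer broad-spectrum antibiotics",
    "Start IV fluid resuscitation",
    "Reassess and escalate if not improving",
]

class ParticipantProgress:
    def __init__(self, name: str, pathway: list):
        self.name = name
        self.pathway = pathway
        self.step = 0

    def current_content(self) -> str:
        """Return only the content 'phased in' for this participant."""
        if self.step >= len(self.pathway):
            return "Pathway complete - time to debrief"
        return self.pathway[self.step]

    def complete_step(self) -> None:
        self.step += 1

# Two participants moving at different paces see different content
alice = ParticipantProgress("Alice", SEPSIS_PATHWAY)
bob = ParticipantProgress("Bob", SEPSIS_PATHWAY)
alice.complete_step()
alice.complete_step()
print(alice.name, "sees:", alice.current_content())  # third step of the pathway
print(bob.name, "sees:", bob.current_content())      # still on the first step
```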

If you’re interested in finding out more, this introduction to Jibe is useful. As far as pricing goes, check out the ReactionGrid Shop for options.

After spending around an hour being shown around Jibe, it really struck me how far advanced it is as a tool for educators compared to Second Life in particular. That statement is very dependent on the type of education occurring, but for more intricate work, the Unity3D interface combined with well-recognised standards for better interoperability makes Jibe a very tasty option indeed.