Sunday, 31 July 2011

Frag Dolls Multiplayer Madness Video

I already wrote about the Frag Dolls multiplayer madness on OnLive. Well, here's the result of this madness in video form, courtesy of Justin.tv. It's split into two parts with a combined length of four hours. Enjoy watching the Frag Dolls play Assassin's Creed Brotherhood multiplayer with other OnLive gamers.


Watch live video from Frag Dolls on Justin.tv


Watch live video from Frag Dolls on Justin.tv

The Future of Gaming: Android Tablets with OnLive

The tablet computer invasion has started, and tablets may soon take over from the PC and laptop. The power inside the new tablet computers is comparable to high-end laptops from just a couple of years ago. But is a tablet really ready to replace your smartphone, laptop and games console? The tablet is seen as a great way to surf the net, chat with friends and do some light email, but is it ready to serve as a serious gaming platform, and could it take over from mobile gaming platforms such as the Nintendo DS and PlayStation Portable?

The future of console gaming will be in your hands, and probably on an Android-powered tablet. With both Nvidia and OnLive supported on the latest Android tablets, we can expect Xbox 360/PS3 levels of graphics on a tablet computer. But that is not all. With HDMI out on some of these tablets, the user can plug the tablet into an HD TV and play games directly on the big screen at home. With Bluetooth on board, you can easily add a wireless controller to the mix as well!

Nvidia Tegra 2 Chip

The world’s first mobile super chip, NVIDIA Tegra brings extreme multitasking with the first mobile dual-core CPU, the best mobile Web experience with up to two times faster browsing, hardware-accelerated Flash, and console-quality gaming with an NVIDIA GeForce GPU.

Nvidia has recently launched its free Tegra Zone application for Tegra 2-equipped smartphones and tablets. The app is a showcase of all the upcoming games built around the Tegra 2 chipset. Basically, all tablets and phones with the Tegra 2 chipset, such as the Motorola Xoom, will be able to run these games.

The line-up of current games is pretty light, but there is potential for some great games here.

OnLive gaming

The OnLive gaming system is a new way to play your favorite games from your favorite publishers. OnLive doesn't require you to have a games console parked under your TV, and it doesn't require you to buy any game discs or cartridges. OnLive works directly over the internet, so all that is required is a broadband connection, an HD TV and a small microconsole.

OnLive provides a wireless controller and allows instant access to games over the internet. The graphics and sound are equivalent to the Xbox 360 and PS3. It works via cloud computing, meaning all the processing is done on servers and only video is streamed back to the gamer. But what is more important is that OnLive will be supported on some tablet computers. Currently the only tablets that officially have OnLive are the HTC Flyer, the iPad and the iPad 2, but expect it to be available on future platforms too. You can get the ripped OnLive Viewer APK for other Android devices. OnLive on tablets is currently available only in Viewer form, which lets you use all features of OnLive except playing games, but that is about to change this fall when the OnLive Player launches.

SOURCE: Android Applications Expert Blog.

GTTV says cloud gaming is the future

Judging by the fact that they include footage of OnLive gameplay, they think OnLive might be part of that future. They also talk about the multiplayer and social experiences cloud gaming will enable, like one-on-one sword fights and mass spectating of live gameplay. The video also includes Hulk Hogan, Sasha Grey and Mark Hamill; something's gotta give.

Here's the GTTV video, the part about cloud gaming and OnLive starts at the 13:40 mark.

Gaikai puts up demos of Dragon Age II, Mass Effect 2 and Dead Space 2

You can play demos of Dragon Age II, Mass Effect 2 and Dead Space 2 on Gaikai, provided that you meet their criteria, which is not an easy task. I don't know if that means EA has taken over business at Gaikai; with so many ex-EA execs at Gaikai, that was bound to happen, and as I wrote earlier, EA wants to make Gaikai its bitch. It could also mean that business is not going according to plan over at Gaikai.

This is a video of one of Gaizilla's and Judas's children. It's not a coincidence that it inspires horror in the person who plays it. The poor soul who had to play the demo didn't like the floaty controls, the poor sound quality, or the fact that he demoed the game on a Mac even though, if he wanted to buy Dead Space 2, he could only play it on a PC, and a powerful PC at that. He also didn't like EA's business decisions, like EA removing games from Steam and, of course, EA's betrayal of OnLive.

THQ Officially Pulls the Plug on Red Faction

Due to disappointing sales of the past two games, loyal OnLive publisher THQ announced this morning that it is discontinuing the Red Faction series.

“Given that that title now in two successive versions has [only] found a niche, we do not intend to carry forward with that franchise in any meaningful way,” THQ investors were informed by THQ CEO, Brian Farrell, in an investor conference call held early this morning by top THQ execs.

Disappointed with first quarter financial performance, Farrell stated, “sales of Red Faction: Armageddon, and our licensed kid’s titles were below our expectations, and the release of UFC Personal Trainer, also adversely impacted the quarter.”

“In today’s hit-driven, core gaming business, even highly-polished titles with a reasonable following like Red Faction face a bar that continues to move higher and higher,” Farrell re-iterated, and unfortunately for the fate of Red Faction, the game “did not resonate with a sufficiently broad console gaming audience.”

Farrell went on to inform investors that, “Moving forward, our core game titles must meet a very high quality standard with strong creative and product differentiation, appeal to a broad audience, and be marketed aggressively.” He added that due to poor sales of key titles like those in the Red Faction series, the company would be revising its internal game review process to ensure all future titles meet the higher quality bar associated with critically and fiscally successful IPs.

Despite the negative future for the Red Faction franchise, the outlook does still appear stable for Red Faction developer Volition, which is currently putting the finishing touches on Saints Row: The Third. The game is the third installment in the hit open-world sandbox action series Saints Row and is due for release on November 15th. Volition is also busy working with writer-director Guillermo del Toro on his upcoming project, inSANE, a survival-horror title that was announced at last year’s Spike VGAs and is not due for release until sometime in 2013.

Farrell pointed out that even though the company is no longer planning any future games in the Red Faction series, other key titles for the remainder of 2011, namely Warhammer 40,000: Space Marine, Saints Row: The Third, and WWE ’12, were on track to meet their assigned release dates this fall. He went further to remind investors that future projects from Left 4 Dead developer Turtle Rock Studios and from THQ’s newly opened Montreal studio, now headed by former Assassin’s Creed series creative director Patrice Désilets, were also in the works. He also emphasized that the future outlook for THQ as a whole is rather rosy as a result.

SOURCE: OnLiveFans.

Mova's FACS capture and Pixel Liberation Front's virtual camera systems make Green Lantern shine


Many people understand at least the basics of what is involved in animating characters, blowing up buildings, and expanding sets to include wild virtual vistas. But what happens before the FX artists are handed their tasks? While Green Lantern was still just a glow in Warner Bros.' eye, Mova and Pixel Liberation Front were already at work building the foundation of the film.

Reality capture FX house Mova contributed facial capture to create the facial rig for the three actors Mark Strong (Sinestro), Temuera Morrison (Abin Sur), and Ryan Reynolds (Green Lantern), while previsualization FX house Pixel Liberation Front (PLF) handled a mix of previs, postvis and final visual effects.


Mova
Mova, the company founded by OnLive CEO Steve Perlman, produced the data needed to create believable facial animation through two specific steps, the capture process and the data delivery, under the supervision of motion capture supervisor and general manager Greg LaSalle and head of motion research Ken Pearce. The first step in the Mova process is melding with the client's pipeline to accommodate the massive amount of information.

"We create more facial data than people are used to getting," said Pearce. "In markers you can get maybe 100 or 150 data points tops, and we are giving people hundreds of thousands of points of data," requiring the FX house to come up with a process that can use the exceptional amount of information and Mova to decide what data needs to be sent.


How do they produce so much more information than marker-based systems? It starts with applying a film of phosphorescent makeup to the actor's face, and to clothing when required.

The random-pattern ultraviolet makeup is applied to the actor's face with an airbrush and is totally invisible to the naked eye. The Mova camera system, however, does see it, and is able to capture the makeup information through its synchronized cameras.


Mova's capture process uses a number of cameras that are calibrated in space so the system knows their locations. It captures the information using white light and UV light that strobe back and forth faster than the naked eye can perceive. The color cameras pick up only the face textures, while the alternating UV lights flash on and off, exciting the makeup so it can be picked up by the black-and-white cameras.
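Conceptually, that strobing scheme amounts to a simple de-interleave: frames captured under white light feed the texture stream, and frames captured under UV feed the pattern stream. A toy sketch of the idea (invented frame labels, not Mova's actual software):

```python
# Toy de-interleave of a strobed capture: in this sketch, even-numbered
# frames are lit by white light (face textures) and odd-numbered frames
# by UV (phosphorescent pattern). Labels are invented for illustration.
frames = [("white" if i % 2 == 0 else "uv", i) for i in range(8)]

# Split the interleaved stream into two aligned streams by light source.
texture_frames = [f for f in frames if f[0] == "white"]
pattern_frames = [f for f in frames if f[0] == "uv"]

print(texture_frames)  # frames 0, 2, 4, 6
print(pattern_frames)  # frames 1, 3, 5, 7
```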

They capture multiple streams of video over the course of several hours. That information is delivered to the client who selects what parts they want to process. "We found expressions for each actor that we wanted to use as a neutral expression, the ground zero for all of the emotion," said Pearce.


"The scans that come out of our system are like scans you would get from a laser scanner, except we are scanning over time, so we've got 24 scans of motion per second." To handle registration, they selected a frame with an appropriate expression, and the client set the character model to the same expression.

That becomes the starting point for how the data drives the character animation. While Mova's technology is able to capture textures and performance at the same time, "for this project," said LaSalle "we didn't capture textures since it is retargeted data. In this case the primary goal was to get the face shapes so an animation rig could be built."


Some of the Green Lantern actors required black and white dots painted on their faces to be used as tracking reference, such as around Reynolds's eyes to show where the edge of the mask would be.

"We did various tests to make sure our makeup wasn't going to interfere with what they were doing and that their dot makeup wasn't going to interfere with our processes. That proved to be completely problem free." Another first was capturing facial prosthetics. "That worked fine too," said LaSalle.


The scans result in a tracking mesh with far more data than marker-based capture systems can deliver. In the case of Green Lantern, that was around 2,500 points. Pearce explains: "We gave them a tracking mesh with a vertex that matched every dot for their dot set.

So in one data file they would have the tracking mesh data, the dot data as geometry, and images from the shoot that were texture-mapped onto the scans so they could also see the dots as a texture map on the actor's face. That was something we hadn't done before, and it all worked very smoothly."
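The "vertex that matched every dot" delivery can be pictured as a nearest-neighbor pairing between the painted tracking dots and the dense mesh vertices. A minimal sketch with invented coordinates (a hypothetical illustration, not Mova's actual pipeline):

```python
# Hypothetical sketch: pair each painted tracking dot with the nearest
# vertex of a dense capture mesh. All coordinates below are invented.

def nearest_vertex(dot, vertices):
    """Return the index of the mesh vertex closest to a 3D dot."""
    def dist2(a, b):
        return sum((ax - bx) ** 2 for ax, bx in zip(a, b))
    return min(range(len(vertices)), key=lambda i: dist2(dot, vertices[i]))

mesh = [(0.0, 0.0, 0.0), (1.0, 0.2, 0.0), (0.1, 1.0, 0.3), (0.9, 0.9, 0.1)]
dots = [(0.05, 0.02, 0.0), (0.95, 0.85, 0.1)]

# Each dot maps to the index of its matching mesh vertex.
pairing = {d: nearest_vertex(d, mesh) for d in dots}
print(pairing)
```

In a real delivery the mesh would have hundreds of thousands of vertices, so a spatial index would replace the brute-force search, but the pairing idea is the same.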


Mova did a mix of FACS (Facial Action Coding System) poses, a series of facial expressions, a series of transitions and motions from one expression to another, and lines of dialogue. Said Pearce, "This film really made us start to automate our pipeline for solving FACS poses more. Everything pushes you to evolve a little bit in some direction; that's one of the things that came out of this job." Mova delivers the job in common file formats: a software system builds the motion files in Maya, and the FACS poses are delivered as OBJs.


PLF
PLF had its hands full with nearly 2,000 shots of previs and thousands of shot revisions, all under the supervision of Kyle Robinson. Working in Maya, they matched the DP's lens package and the proper film back to ensure the resulting images "looked exactly the way they were supposed to look once you looked through a real-life camera.

If you don't get that piece of information correct, everything you do is wrong." This step provides set information such as how much blue or green screen is needed, how much set extension needs to be filled in digitally, or how many CG characters are in the background.

Next, the characters, vehicles, environments and locations are constructed digitally, and the sequences are animated to real-world specs. Once all the elements are assembled, the sequence is edited with sound in Final Cut Pro and shown to the director "to fine-tune the edit, trimming frames, swapping shots, changing the angle. That is handed to every department, who then does a breakdown" listing which locations need how many shots, how many shots can be done in a day, how many days each location needs, how many background extras are required, and how many days are needed for special effects. All before anyone even begins shooting.


As with all films, Green Lantern required specific tinkering to get the proper look. One stunt required a speed study. PLF built the location and vehicle. "We did different miles per hour tests to see how things would react when the vehicle went through the set."

Testing the speed in increments and reviewing how the set responded to various speeds, they ran the simulation through various camera angles until the second unit director decided on the one he liked best. "Then we'd go out to the set and they would rig the gag up, run it at the MPH we had figured out.


That was a pretty interesting challenge, the true essence of previsualization is to figure out a stunt or effect, work on it, decide on something you like, bring it to the real world, and work with the crew to get it to exactly match what the second unit director wanted." It's not as easy as it sounds.


This was one challenge that kept Robinson up at night. The set was only so big and anything going through at 30 MPH would go through in seconds so PLF needed to find ways to stretch the set real estate to make the set appear bigger than it was.


Luckily, Green Lantern didn't require all things to be limited to real-world specs, but the message still had to come across. The constructs created by the Green Lantern's imagination are a good example of this. A construct is the materialization of the Green Lantern's thought, his power.

The ring gives him the ability to materialize whatever is in his imagination, constructed from the green energy of willpower. It manifests itself into a material object. If he needs a gun, he has one, and it's a working piece of machinery until he stops thinking of it and it disappears.

The Green Lantern usually makes a construct when he's in combat, and it's almost instantaneous. "How do you communicate the whole philosophy of what a construct is in eight frames of action?" asked Robinson.


PLF helped the Art Department do R&D and design development on how the constructs formed. "The Art Department had a crystal-clear idea of how they wanted it to happen.

There were different stages where the construct came out and they wanted to see that fleshed out in animation" to define the dynamic energy of the constructs coming to life. "We went through several iterations of different styles on how the constructs were made, the speed, how transparent, how green," says Robinson.

"That was fun because we got to do a bunch of shaders, dynamics, and completely go off the conceptual end of things and just try wacky stuff." While this area isn't always handled in previs, often they are working hand in hand with the art department and can turn over ideas quickly, speeding up creative decisions by handing over a visual. "Of course that doesn't mean this is what the final idea is," Robinson clarified, "but that is the hand-off from the art department to the visual effects department" giving them a strong visual reference to what the art department is looking for.


PLF uses a virtual camera environment system for virtual production, a process that helps the art department, production designers, or DPs visualize a set space with a virtual camera.

"Motion capture cameras will set up a volume," explains Robinson. "From that, we have a Wacom Cintiq tablet which is getting feedback from the computer. It's attached to a camera in a 3D software package. When you turn in real life, it turns the camera in the virtual environment.


So the art director can build a beautiful world in the set, then show the director still images of it." It's an idea they picked up from Avatar, but the system is proprietary in-house engineering. "What Cameron had was around 20 guys on set who would do real-time animation.

They would update the set for him on the spot. We don't have the team to do that, but we had enough so that the director could say move this element there, scale this wall, put a window here. Basically allow him to experience the space and kind of dictate adjustments to the environment while he's in it and long before it's built."


PLF used that same motion capture system for the stunt team in a fight sequence. The stunt actors donned motion capture suits and acted out the choreographed fight sequence. PLF retargeted that information on to their digital characters. "Once we have the characters doing the motion capture augmented in virtual reality with some key framing animation, we drop them into the set the director and production designer just scouted with the virtual camera.

Now you have the characters animated doing the choreographed fight sequence in the environment that was just approved. Now, long before they get to the set, to shooting, to the blue screen setup, the director can go in with the same virtual camera and do maneuvers around them and see their action. This way there is a very clear vision of what is expected by all the actors, stunt players, and crew."

After previs, there is postvis. "That is where it gets technical. You start tracking empty plates and adding your low-res models in there so that you can get shots with elements to the editors so they can start cutting the movie." Again, Green Lantern's requirements made this no small task. When they were doing postvis, making elements and shots to be edited for the studio temp screening, they delivered around 800 temp shots for that screening alone with a team of just six people.

SOURCE: CGSociety.

Saturday, 30 July 2011

OnLive CEO Steve Perlman and his DIDO Wireless Tech

Silicon Valley’s self-styled Thomas Edison has found a way to increase wireless capacity by a factor of 1,000.

OnLive CEO Steve Perlman and Antonio Forenza with DIDO test carts
Lunchtime, Lytton Ave., Palo Alto, Calif.: It’s a bright, mild July afternoon, and khaki’d professionals meander past the boutiques and coffee shops, heading back to their digital workstations. One of the slower pedestrians, who gets more than a few curious glances from passersby, is a middle-aged guy in jeans and a green T-shirt, carefully rolling a utility cart down the sidewalk. The cart is one of those black, plastic, double-decker jobs you find at a home-improvement store. It’s laden with electronics and has white vinyl plumbing pipes that stick into the air from two corners. “It’s a very small group of people that actually turn the wheels around Silicon Valley,” says Stephen G. Perlman, the Silicon Valley inventor, entrepreneur, founder and CEO of OnLive who once sold a company to Microsoft (MSFT) for half a billion dollars, as he hunches over to keep the gear from jostling.

“What’s that?” asks an onlooker, a scruffy guy with gray hair and a beard to match. He looks like he’s been to a few too many Grateful Dead concerts.

Perlman patiently explains that he’s developing a new type of wireless technology that’s about 1,000 times faster than the current cell networks. It will, he says, end dropped calls and network congestion, and pump high-definition movies to any computing device anywhere.

“Huh. Cool,” says the guy, evidently deciding that Perlman is some sort of technological busker. He dumps a handful of acorns on Perlman’s cart and walks away. Perlman shrugs: “You get all kinds here.”

Now that he’s stopped in front of the Private Bank of the Peninsula, the demonstration is about to begin. It’s the first he’s ever given of his latest technology on the record. He points to the laptop on his cart. There’s a square with purple dots dancing around like television static. Perlman calls his office and tells an engineer to activate some software. Suddenly, the dots form a tight ball in the center of the screen. Perlman explains that the antennas, fastened to the ends of the plumbing pipes, have just picked up a radio signal sent from his office across the street. “It’s almost like magic,” he says.

A radio signal from point A to point B is hardly magic, but it isn’t just any signal his utility-cart contraption has picked up. This one reached B without encountering any hiccups or degradation of the sort familiar to anyone who tries to make a mobile call or watch a streaming video on a smartphone. The tight ball of dots represents what Perlman calls “the area of coherence,” and it means the device has found a pure signal.

Perlman named the technology DIDO, for Distributed-Input-Distributed-Output, a wireless technology that breaks from the time-tested techniques used for the past century. DIDO, he says, will forever change the way people communicate, watch movies, play games, and get information.

“They thought we were crazy,” Perlman says of the response he got from scientists during the concept stage of DIDO. He says he had his first inklings of the technology around 2001. He’d been demonstrating his Moxi Media hub on the trade-show circuit, specifically showing off its ability to handle video over Wi-Fi. His demos always ran smoothly in those days—this was before laptops, smartphones, and tablets owned by every conference attendee clogged up the Wi-Fi. But he feared the coming bottleneck. “As soon as everyone saw how convenient this was and started sharing the network en masse, we were doomed,” he says.

Wireless networks all suffer from a basic limitation: interference. Radio signals are waves. If you’re watching Netflix (NFLX) on your iPad via Wi-Fi, the tablet’s antenna is receiving a signal from a transmitter. If no one else is around—and you’re in a room with thick walls that block other radio signals—you’ve got a great connection. If someone else has an iPad in the room, each person ends up with half the maximum data speed. Throw a second Wi-Fi signal into the mix, perhaps from another office or home, and interference becomes an issue. Both signals hit your iPad at the same time, and the device has to try to discern the movie from this noise. People in apartment buildings or at crowded coffee shops know all too well just how shoddy a Wi-Fi connection can be when lots of signals collide.

Cellular operators like AT&T (T) and Verizon Wireless (VZ) face similar problems. They would love to put up towers all over the place, but they can’t. Signals from towers bleed into each other, causing interference. One tower covering a certain area works fine until too many nearby users make calls or pull up Web pages at the same time. That’s when data transfer rates fall and calls drop, aka iPhone syndrome.

Perlman had an idea. Interference happens when a device receives multiple signals at once and the wave is muddied. The physics gets very complicated here, but Perlman thought there might be a way to turn interference into a virtue—use that combining property of radio waves to “build” a signal that delivers exactly the right message to your iPad. Multiple transmitters would issue radio waves that, when they reach your tablet, combine to produce a crystal clear signal. If there’s another person in the room with an Android phone or a laptop, the system would take those devices into account so that they, too, received unique waves from the transmitters. Such a system would need to precisely analyze wireless information from the devices at all times, and constantly recalculate the complex combinations of signals from each of the transmitters on the fly. Figuring all that out in real time would of course require some extremely powerful computers.

That, in a nutshell, is DIDO. About seven years ago, Perlman set out to assemble a team to test the concept and, assuming it worked, build it. He shopped his idea to scientists at universities around the country, and eventually found a taker, a PhD candidate at the University of Texas named Antonio Forenza. After graduating with a degree in electrical engineering, Forenza agreed to work for Perlman and has spent the last few years of his life bringing DIDO to life.

Forenza built a makeshift lab at his house in Austin, Tex., which overflows with equipment—antennas, radios, power supplies, and computers. Much of this equipment goes toward testing DIDO in urban and suburban settings, but Forenza has a rural setup as well. He’s conducted long-distance tests linking Austin with ranches about 25 miles away in Pflugerville and Elgin. Forenza purchases prefab mini-barns from Home Depot (HD) and packs them with his homemade gear. “We’ve had cows chew up our cables,” Perlman says. “And some amorous bulls get friendly with our antenna masts.”


To make DIDO work, Forenza and other members of the Rearden team developed three basic components. First, there’s a server in a data center that uses a complex set of algorithms to create a “digitized waveform”—the unique, interference-resistant message that will reach someone’s computer or phone. The server sends this message over the Internet to the second component, a small device that could sit in an office or a home, much like today’s Wi-Fi routers. That device then delivers the message to a phone, laptop, tablet, or TV that’s equipped with the third part of DIDO, a special antenna.

All three components are in constant communication. As a person moves around with a smartphone, the server recalculates and keeps crafting new waveforms. The result is a consistent, full-powered signal, rather than one that’s shared with other nearby devices. In urban settings, a DIDO transmitter can cover about a mile. That’s a huge leap over the 100 feet to 300 feet for Wi-Fi access points that must limit their broadcasting oomph to avoid interference.

Since electromagnetic noise does not affect the DIDO transmitters, they can be placed anywhere. They’re small, too, which could mean no more not-in-my-backyard fights over the placement of unsightly cell towers. The multi-city tests conducted by Forenza also showed that DIDO transmitters could be tuned to bounce signals off the ionosphere, a layer of the atmosphere about 150 miles up. Using this technique, the technology could serve rural areas and even airplanes. “We can provide DIDO service down to the floor of the Grand Canyon,” Perlman says, adding that he could cover huge swaths of rural America with high-speed wireless using just dozens of DIDO access points.

There’s plenty of work left to prove the mettle of DIDO in real-world conditions. The tests to date have been conducted on the amateur radio spectrum with a maximum of 10 people communicating simultaneously, and the software that performs the complex calculations behind the scenes is still buggy. But as the geek saying goes, those are engineering problems, not science problems. Richard Doherty, director of the tech consulting firm Envisioneering, has examined the DIDO system and says it’s breathtaking. “Steve needs to put up more transmitters and play around with different wavelengths,” he says. “He’s talked about simulating 1,000 times performance improvements over cellular, but there’s no reason why even greater gains might not be possible. Steve’s discovered things that aren’t in any of the textbooks or the patent roster.”

The greatest obstacle for Perlman, as Doherty sees it, may be the telecommunications industry, which has invested billions setting up conventional cellular networks. “The current use of radio is bound more by inertia and successful lobbying efforts than by efficient use of spectrum,” Doherty says. “But Steve has shown the old models are limited, and there is something else we can do. People will demand this.”

Now that he has proved the technology works, Perlman has started to receive investor interest in DIDO. He declines to reveal the names of any specific organizations, but says that European groups have requested the most information. “Frankly, we’re getting more interest from foreign governments than the U.S.,” he says. “It is very likely the first widespread use will not be here.”


SOURCE: Businessweek.

OnLive CEO Steve Perlman, The Edison of Silicon Valley

Perlman, who’s 50 and has been building companies and technology for 30 years, has earned a reputation as a showman. But, like a boastful 19th century explorer who has to raise money and excitement to launch his expeditions, Perlman really does discover new lands and species. Not long after graduating from college, he got a job at Apple (AAPL), where he helped create QuickTime, which let people play movies on their Macs. Then he started WebTV, one of the first services to link the Internet with TVs, and sold the company to Microsoft in 1997 for $503 million. Perlman has secured about 100 patents and has 100 more awaiting review. “We don’t really have a Thomas Edison or a Henry Ford pumping out inventions,” says Richard Doherty, who is director of the tech consulting firm Envisioneering and is familiar with Perlman’s DIDO system. “Steve is coming close, and he’s still a young man.”

Steve Perlman
Most of Perlman’s ideas come to life at Rearden, a business incubator in San Francisco that he created in 2000. Rearden has given birth to Moxi Digital, which made a combo DVR, music player, DVD player, and interactive set-top box (a forerunner to the home digital media hubs now coming to market), and Mova, a company that solved the problem of capturing actors’ facial movements to allow for computer-generated effects in movies. OnLive is a Rearden-backed service that has made it possible to play graphics-rich video games—the bulkiest of data—via the Internet.

“Rearden,” it’s worth noting, comes from one of the protagonists of Atlas Shrugged, the novel by Ayn Rand. In the book, Hank Rearden is an honorable genius who battles the small-minded meddlers trying to bring him down; he invents a stronger kind of steel and gets the girl. Perlman has since tried to distance his incubator from Rand and her politically divisive libertarianism, but he still talks like one of her characters, especially when he gets going about Silicon Valley and his place in it. He figures that this region once teeming with risk-takers has grown soft and unadventurous. Venture capitalists have succumbed to funding Internet eye candy like social networks and coupon services at the expense of breakthrough inventions. And yet a few people out there—him, for instance—are still willing to do the whole blood, sweat, and tears thing. “People have decided they don’t want to invent anything new anymore,” Perlman says. “To hell with them.”

DIDO, Perlman says, will right the wrongs of the wireless networks crumbling under the weight of iPhones, Android smartphones, and tablets—and create a platform for completely immersive digital experiences. He wants to build Mova facial-capture technology right into TVs and computer monitors, so people’s heads could replace those of characters in video games. “You can become Batman, and the other players in the game will see your expressions,” Perlman says. He’s also exploring virtual retinal technology. “It’s a new form of optics that allows you to see the world in 3D. It’s not just an image coming out of the TV screen. It’s viewing your entire surroundings in 3D and having them be totally virtual.” Perhaps wireless technology could be used to create standing fields, he says, so people could one day reach out and touch the virtual 3D objects. His description sounds a lot like a Holodeck, a room depicted in Star Trek where anything can appear as real. “We’re looking at creating entire virtual worlds,” Perlman says. “Eventually, we will get to the Holodeck. That’s where all these roads lead.”

Rearden’s main office is near the entrance to the San Francisco Bay Bridge in an old building that used to house a printing company. The two-story space looks like a large, luxurious library crossed with a TV studio, with dark wood paneling along the walls and towering wooden beams in the center of the main room. Desks take up the first floor. The second is lined with shelves filled with toys, movie scripts, and research files on historical figures collected by Perlman employees who travel the world in search of good stories. Dozens of high-powered lights dangle from the ceiling, flanked by speakers. There are motorized lifts for moving bulky cameras, bars that hold up giant blue-screen backdrops, and grand, sliding whiteboards for brainstorming. Up top, one of Perlman’s staffers works at a state-of-the-art editing bay where Perlman produces TV and Web commercials for his various companies.

Perlman’s blend of fantasy and technology goes back to his childhood. He sums up his early years with four salient facts: He was born in 1961, grew up in Connecticut, his parents were both physicians, and they denied him an Apple II computer. They feared he would spend all day playing video games on the blasted machine—and they were right. “I was forced to build my own computer and create a graphics display for it and then write video games that I could play,” Perlman says. From that point on, Perlman began scrounging around computing stores for parts and sending off faxes to order the latest chips coming out of Silicon Valley to build more machines.

Perlman would use this self-taught ability to understand electronics and computers as a way of getting out of jams. During his senior year in high school, he skipped so many classes that he was in danger of not graduating. So he built an illuminated marquee for the drama department to secure an English credit. Then he designed a computer simulation of the forces behind swings in the U.S. economy during the 19th century for a history credit. Later, while attending Columbia University, Perlman says he took a computer-programming class and taught himself Pascal during the open-book, midterm exam.

Perlman graduated with a computer-science degree and headed for Silicon Valley. At 23, he started work at Apple with the lofty title of principal scientist. “He just made stuff work,” says John Sculley, Apple’s chief executive officer at that time. Perlman emerged as one of the key developers of Apple’s QuickTime technology. Most of the QuickTime team worked together in a shared space. Sculley won’t go into details but says Perlman didn’t always fit in with his colleagues. He eventually gave Perlman his own space near the CEO’s office. “It made sense for him to focus entirely on doing his own work,” Sculley says.

After Apple, Perlman started WebTV in 1995. At a time when many people were just discovering the Web, Perlman and his team built a $300 device that could turn a television into an Internet appliance capable of browsing sites and sending e-mails. At one point, he paid a staffer to travel to the island nation of Tuvalu in an attempt to secure the rights to its .tv domain name. The Prime Minister received a Mac and a printer, and returned the gesture by sending Perlman shell necklaces. “We traded silicon for calcium, I guess,” says Perlman. “We did get the domain, but the government was overthrown and the new regime did business a different way.”

Aaron Burcell, an early WebTV employee, remembers Perlman’s management style as a mix of hard-headed business strategy and technology evangelism. Most people bought into the message—it was “emancipating the Internet” back then. “I don’t want to say it’s a religious experience, but we found the way he talked inspirational,” Burcell says. Those who fought Perlman’s reasoning or requests would face his temper. Andy Rubin, the senior vice-president of mobile technology at Google, describes Perlman as being consumed by a drive to create breakthrough technology. “I think true visionaries push their employees really hard,” says Rubin, a close friend of Perlman. “You have to be signed up for a 24 by 7 type of deal. Some people can do it and some can’t.”

In 2003, Rubin had run out of money while pursuing some new cellphone software at his startup Android. “Without flinching, Steve brought over $10,000 in cash in an envelope,” Rubin says. Perlman declined to take any equity in the company, which would later be acquired by Google and become the basis of its smartphone software empire. “I think he would rather invest in his own ideas,” Rubin says. “Steve only thinks about the things that will change history.”

Perlman laid the groundwork for his total-immersion-media vision in 2004 when he started Mova. Most motion-capture work involves affixing sensors to actors’ bodies, then turning their movements into computer-generated movie characters. Perlman wanted to tackle the more difficult prospect of capturing people’s faces with all their subtle gestures, sly crinkles, and emotions.

One early idea: Submerge actors in liquid and grab their faces via ultrasound. “There’s this one experimental fluid that exists where people can be submerged and still breathe without a mask,” Perlman says. “It’s been tested on rats.” Before trying it with humans, the engineers discovered that something simpler—Halloween makeup—could be applied to an actor and allow a camera to track more than 10,000 points on his or her face. The technology made its debut in The Curious Case of Benjamin Button to handle the reverse aging of Brad Pitt’s character. It has since been used in the Harry Potter films, The Incredible Hulk, and TRON: Legacy.

The technology has gotten good enough that Hollywood actors have started asking Mova to capture their faces while they’re young. The hope is that their youthful, virtual visages can keep earning for them in the decades to come. Perlman won’t name names. Video game makers have also turned to Mova to make their characters more realistic. Perlman plays a clip from an upcoming game where a mad doctor tortures a man strapped to a chair by shocking him in the neck and stabbing him with a syringe. The veins in the man’s neck throb, his face tenses, and pain sweeps over his face. It’s only after the clip has run for a couple of minutes that Perlman reveals that the man in the chair and the doctor are computer-generated. With OnLive, Perlman has built an entirely new set of data-compression and networking technologies to bring interactive games with that kind of photorealism to the home. Could DIDO take the next step and deliver all that wirelessly? “That’s the plan,” Perlman says.

Should DIDO become widespread, it’s a safe bet Perlman will have moved on to some new breakthrough by the time it does—like the heroine of a movie he wrote about a decade ago. The script, set in present-day Silicon Valley, tells the tale of a student at Stanford University. She’s created some virtual retinal technology that lets people see computer-generated 3D objects in the physical space around them. The woman sets out to form a company to develop the technology, bringing along a motley crew of hacker and stoner friends.

They watch as venture capitalists woo the woman with makeovers and trips on private jets. The late nights in the lab give way to parties and investment presentations. “Her friends don’t recognize her anymore,” Perlman says. “I have seen this happen time and again.”

The plot is Ayn Rand. The triumphant ending is pure Steve Perlman. Slowly but surely, the VCs coax her into bad financial deals. Her work grinds to a halt. All seems lost. But then the old crew of misfits comes to the rescue and gets the girl to go back to the garage where they use borrowed parts and scraps to bring the new technology to life. “That’s my kind of company,” Perlman says. “That is what Silicon Valley is like for those of us that work all night long. It’s crappy offices. It’s borrowed spaces. Welcome to my world.”

SOURCE: Businessweek.

World's first Deus Ex: Human Revolution review, a game that puts almost everything else in the genre to shame

You can already pre-order Deus Ex: Human Revolution on OnLive. Deus Ex: Human Revolution releases on August 23, 2011. The world's first review of Deus Ex: Human Revolution is here, courtesy of PC Gamer. The first review is very favourable: Deus Ex: Human Revolution scores 94 and earns an Editor’s Choice award in PC Gamer.


PC Gamer delivers their verdict on Deus Ex: Human Revolution in the latest issue of PC Gamer UK, which will hit newsstands next Wednesday, August 3. They awarded it a score of 94 and an Editor’s Choice award.

In the huge, eight-page review the reviewer Tom befriends turrets, throws vending machines off buildings, hacks into security terminals, resolves hostage situations and dismembers foes with augmented fist-chisels to conclude that Human Revolution “is absolutely the Deus Ex of our age, a genuinely worthy prequel, and a game that puts almost everything else in the genre to shame.”



SOURCE: PC Gamer.

Thursday, 28 July 2011

OnLive lets You Launch Directly into a User Profile

OnLive lets you sign in and launch directly from a web page link to your user profile on OnLive. If you know other profile names on OnLive, be they friends or strangers, you can of course also launch into their profiles. If an OnLive member has a private profile, a message pops up saying that player information is hidden due to privacy settings.

Let's say you wanted to get to the OnLiveFans profile page on OnLive; the link would look like this: http://www.onlive.com/launch/onlive/profile/onlivefans

Or if you want to get to a profile name that contains spaces, like the OnLiveFans com profile, you simply include the spaces in the link (your browser will encode them as %20), like this: http://www.onlive.com/launch/onlive/profile/onlivefans com

If you feel like watching some ladies play games on OnLive, you can get to the Frag Dolls profile page; the link is http://www.onlive.com/launch/onlive/profile/The Frag Dolls
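If you're generating these links yourself (say, for a fan site), you can percent-encode the player tag so spaces become %20 and the link works anywhere, not just in browsers that auto-encode. A minimal sketch in Python; the helper name is my own, and only the URL pattern comes from the examples above:

```python
from urllib.parse import quote

BASE = "http://www.onlive.com/launch/onlive/profile/"

def profile_launch_url(player_tag: str) -> str:
    """Build an OnLive profile launch link, percent-encoding
    spaces and other unsafe characters in the player tag."""
    return BASE + quote(player_tag)

print(profile_launch_url("onlivefans"))
# -> http://www.onlive.com/launch/onlive/profile/onlivefans
print(profile_launch_url("The Frag Dolls"))
# -> http://www.onlive.com/launch/onlive/profile/The%20Frag%20Dolls
```

The encoded form is what your browser sends anyway when you paste a link with literal spaces into the address bar.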

OnLive has stated that they intend to follow in the footsteps of Facebook with the OnLive Game Service. With the recent integration of Facebook, the ability to launch free game trials from web page links, and now the possibility of launching directly into OnLive user profiles from web page links, they are clearly working hard toward that goal.

OnLive Console - My Head is in The Clouds

I've been a member of OnLive since late 2010. As much as I may like it, I simply can't stand playing games on the PC (with the exception of Rift). Due to this hatred, I must admit that I rarely used OnLive, except in those console-less situations I found myself in.


That was of course until I discovered the OnLive console. A few weeks ago the lovely folks over at OnLive sent a console my way for review, and I am pleased to announce that it truly is the future. The system is no larger than a smartphone, the controller is comfortable and lightweight, and the ease of which games are playable is astonishing.

I think it's safe to say that I prefer playing via OnLive over any of the "other" consoles out there.

When you open the OnLive console box you are greeted with the console - a small black box roughly 4 inches long by 3 inches wide - a controller, Ethernet and HDMI cables, and both a rechargeable and a standard battery pack.

Setup is incredibly easy as well. Simply plug the Ethernet cable into the OnLive system, plug the OnLive system into your TV, turn on the wireless controller, and it's game on. Games are purchased via the Marketplace, and roughly seven seconds after purchase are completely playable. No updates, no installation, no time to make yourself a sandwich before the Main Menu.


What truly amazes me is how comfortable the controller is. Both analog sticks are easily accessible and thumb-friendly. The triggers are in the perfect position, ultimately edging out any controller on the market. Given that the games are streamed to your television, one may suspect there would be severe lag. News flash - there isn't. The only reason there may be lag would be your internet connection, and even then it's a rarity.

So far I have played roughly two-dozen games, including Assassin's Creed Brotherhood, Homefront, Darksiders, Red Faction Armageddon, Borderlands, Tomb Raider Legend, Pure and Just Cause 2. In games with a multiplayer aspect (Homefront and Assassin's Creed Brotherhood mainly), the controls were as responsive as their console counterparts.


Due to the simplicity and ease through which one can play each game, OnLive is blurring the line between hardcore games and casual gaming. Without moving an inch I can switch from Red Faction to Homefront to Borderlands in a matter of seconds. No starting times, no loading issues, no time wasted switching discs and waiting for the latest unnoticeable update. Cloud gaming is the future of the industry, and OnLive is doing a tremendous job of leading the surge.

I am now a believer, and OnLive has earned my respect.

SOURCE: The Real Strength Gamer.

F.3.A.R. Multiplayer To Make Its Way To OnLive’s PlayPack

OnLive has had some amazing FPSs (first-person shooters) grace the cloud over the past few months, but some people who can’t afford fifty bucks a pop may feel left out. OnLive seems to understand this, as F.3.A.R. multiplayer makes its way to the PlayPack bundle, marking the second time OnLive has released a multiplayer-only FPS to the bundle.


On Tuesday, August 2nd, F.3.A.R. multiplayer will become the 73rd title to make its way to the PlayPack, giving that much more incentive to pick up what is quite possibly the greatest deal in gaming.

Boasting four multiplayer modes, F.3.A.R. will keep you coming back for more, sporting what has been acclaimed as the greatest cover system seen in an FPS to date. And with its emphasis on cooperative play, you’ll never face F.3.A.R. alone.

It’s interesting to see how OnLive has used both PlayPasses and the PlayPack to complement each other. While many may want the full game, some people honestly buy games such as F.3.A.R. and Homefront just for the multiplayer action, and OnLive making the multiplayer available through the PlayPack gives gamers a choice on just how much of a game’s content they want. Who wants to pay full price for a game you’re only playing for the multiplayer anyway? Hopefully this is a continued trend with OnLive, giving players far more options than other gaming mediums could hope to offer.

So snag that PlayPack; 73 games is more than worth $9.99 a month, and now with the hottest titles’ multiplayer available, that deal can only get sweeter.



SOURCE: OnLive Informer.

Wednesday, 27 July 2011

Saints Row: The Third up for Pre-Order on OnLive

If you buy Saints Row: The Third on OnLive through August 25, 2011 (8:59PM PDT), you are eligible for a free OnLive Game System, shipping, handling and tax not included, or a free Full PlayPass to one game. Those who purchase a Full PlayPass for "Saints Row: The Third" by November 14, 2011 (08:59PM PDT), will receive Professor Genki's Hyper Ordinary Pre-Order Pack. Saints Row: The Third releases on OnLive, day-and-date with consoles and PC, on November 15, 2011.


In THQ's Saints Row: The Third, years after taking Stilwater for their own, the Third Street Saints have evolved from street gang to household brand name, with Saints sneakers, Saints energy drinks and Johnny Gat bobble head dolls all available at a store near you. The Saints are kings of Stilwater, but their celebrity status has not gone unnoticed. The Syndicate, a legendary criminal fraternity with pawns in play all over the globe, has turned its eye on the Saints and demands tribute. Refusing to kneel to the Syndicate, you take the fight to a new city, playing out the most outlandish gameplay scenarios ever seen. Strap it on.


Saints Row: The Third sells for $49.99 on OnLive. It will feature single player and multiplayer game modes. The game will support mouse and keyboard, as well as gamepads.

Join the Frag Dolls Multiplayer Madness this Friday via OnLive

Brush your teeth and comb your hair, fellas, as this Friday you've got a couple of hot dates. That’s right, we said dates, and this time it won’t cost you any money to spend some quality time with these ladies. You see, OnLive and Ubisoft will be hosting a Multiplayer Madness event with the world-famous Frag Dolls this Friday from 8pm-11pm Eastern time.

Glitch, Fidget, Siren, Rhoulette, Phoenix and Brookelyn posing for the picture
OK, for the few of you out there saying "A Frag what?", let me give you a little knowledge on these ladies. The Frag Dolls are a team of professional female gamers recruited by Ubisoft to promote their video games and represent the presence of women in the game industry. These gamer girls play and promote games at industry and game community events, compete in tournaments, and participate daily in online gamer geek activities. Started in 2004 with an open call for gamer girls with competitive gaming skills, the Frag Dolls immediately rocketed to the spotlight after winning the Rainbow Six 3: Black Arrow tournament in a shut-out at their debut tournament appearance.

The Frag Dolls have competed in and won numerous tournaments, including the 2004 Electronic Gaming Championship in Splinter Cell: Chaos Theory, the 2006 World Series of Video Games in Ghost Recon: Advanced Warfighter, Winter CPL 2006 in Rainbow Six Vegas and Guitar Hero 2, the Major League Gaming circuits from 2005-2008 in Ghost Recon Advanced Warfighter, Rainbow Six Vegas, Rainbow Six Vegas 2 and Halo 3, and numerous online tournaments. The team’s tournament accolades include first place finishes at Winter CPL 2006 in Rainbow Six Vegas, at multiple years of the Penny Arcade Expo in Tom Clancy titles, a 9th place finish out of 116 teams in the Penny Arcade Expo Halo 3 tournament in 2008, and 11th place overall in the Major League Gaming 2007 season in Rainbow Six Vegas, making them the first all-female team to reach Semi-Professional status in Major League Gaming history.


Needless to say, these ladies kick some serious bootie, and this Friday they are coming to OnLive to play against you and anyone else who dares challenge them to a few rounds of Ubisoft’s newest OnLive release, Assassin's Creed Brotherhood. OnLive has stated that multiple prizes will be awarded throughout the event, including two autographed OnLive Game Systems.


There are already five Frag Dolls who are currently members of OnLive. You can easily get to their profiles through the showcase section of OnLive, or by clicking this direct link to the Frag Dolls profile. Through the Frag Dolls profile you will be able to spectate the girls while they play on OnLive.

OnLive Player Tag: BrookeLynFD
OnLive Player Tag: glitch FD
OnLive Player Tag: Rhoulette
OnLive Player Tag: SIREN FD
OnLive Player Tag: ValkyrieFD
So if I were you I would spend the next few days honing my skills, as you only get to make a good impression once, and these ladies don’t hang with losers. Watch out Brookelyn, I’m coming for you. I’ll be the guy sporting an entire can of AXE body spray.

SOURCES: OnLive Informer, OnLiveFans.

Homefront on OnLive Restored, Voice Chat Added

For the last couple of weeks, Homefront on OnLive has been a rather wacky place, between cheats enabling god mode and flying, the excessive creation of aimlessly running bots, and rather strange "monkey paw"-esque glitches. Homefront has once again been patched, and all of these quirks are gone, leaving us with functioning voice chat as a bonus.

Although it’s bittersweet to see it all go (fly matches in private servers anyone?) it’s for the better as we are left with a functioning Homefront that doesn’t frustrate with unfair matches or unplayable glitches. Homefront seems to be back to its former glory and more.

These, however, are not the only improvements that come with the latest OnLive Homefront update. OnLive & THQ also appear to have fixed the problem with controls not saving. Now control settings should be saved from the last time players played the game.


As you can see from the image, OnLive is creating channels for the different games as opposed to using in-game voice chat options. Below you will notice that Homefront has two channels available: “Squad Chat”, which is exactly what it sounds like, and “Open Mic Intermission”. “Squad Chat” is limited to squad mates only, as opposed to your entire team, although this has the advantage of one team becoming multiple, cooperative teams. “Open Mic Intermission” only shows up in between matches, allowing you to freely chat with anyone currently on the same server.


Unfortunately, Homefront is plagued by the bane of multiplayer voice chat: squealing, whiny teenagers and crappy music no one wants to listen to (no, really, we don’t want to hear it, I don’t care how cool you think it is). Although unfortunate, it’s the nature of the multiplayer beast and something we’ll have to cope with.


The convenient thing about all of this is that it works exactly the same way the voice chat beta has been working for spectators. Each channel can be navigated like menus or folders, allowing for mute control or just a good way to view who’s in the channel at the time. When in game, simply hold down the “I” key to talk (voice chat is automatic for Microconsole users with a Bluetooth headset). The ramifications of this setup should be very clear: using channels much the way “TeamSpeak” does could open up functions such as the ability to join other channels directly. This could mean cross-game chat, making OnLive an even more connected community than it already is.

Although it’s all very exciting, it would be nice to see these features implemented a little more quickly. Homefront released on March 15th, and now, four whole months later, we are only just getting voice chat in multiplayer. It would seem as though they are working on one game at a time. There is hope, of course: F.3.A.R. had been out barely a month before getting voice chat, and Homefront seems to have been a formidable foe for the engineering team at the cloud to begin with.

In any event, Homefront is “normal” again, and now squads can function more effectively as a team with “Squad Chat”. Hop on in and enjoy some frags with your PlayPass, PlayPack, or free trial!

SOURCES: OnLive Informer, OnLiveFans.

Tuesday, 26 July 2011

Mova CONTOUR: The Hidden Potential Of OnLive

When I finally made the decision to take the plunge back into PC gaming late last year, I took some time to actively consider OnLive. The concept was appealing at first blush. No need to worry about a new graphics card, or even a new monitor. Just plug any PC (or netbook, or Mac) into your HD TV, sit back, and play PC games. All the hard work is done in their servers remotely, and all I'd need is the right size internet connection and, whiz bang, I'm back in the saddle again. Hands down, this would have been the least expensive option for me, but I just couldn’t do it.

The problems for me were two-fold. First, there was a critical mass of users on my primary platform, the Xbox 360. I knew damn well there was a group of users just as large, if not larger, on the PC. These were my friends and GWJrs alike, and to enter the OnLive space would have been to leave them behind. Part of the joy I have of gaming is multi-player, and I was not confident that I would have the same experience inside such a small community as OnLive.

Second, there was the catalog. While the 360 has many exclusives, there were many games I couldn't play because they were on the PS3 or the PC. To take the leap into OnLive would have given me access to dozens of PC-exclusive titles, but as far as limiting my library it would only have exacerbated the situation.

And so I made the decision to build my own PC, because the best gaming experience lay there and there alone. But I can look into the future and see a time when OnLive and its partners can contribute to gaming in real and surprising ways.

You've made it this far into my diatribe, so a little full disclosure: once upon a time, I did business with OnLive. Nothing I say here will have anything to do with the NDAs I may or may not have signed, and know that what I say here is entirely my own opinion and not that of my former employer. In fact, every bit of background knowledge I reference here can be found in OnLive’s press conference from GDC in 2009.

One thing that OnLive has going for it right now is momentum. Even before their service went live, they had most of the major publishers, from EA to Take Two, on board. Why? Two reasons. The chief reason a publisher would want into OnLive is the elimination of the piracy tax. If your game never leaves a proprietary data center, if the only element of gameplay forced down to users is a video feed, then you can’t pirate the game. I can also envision a scenario where porting the game code into the OnLive cloud is cheap, if not entirely free, in terms of development costs.

Another sign of momentum is the increased size of their catalog. OnLive now offers 100 games for sale or rent on their cloud system, and regularly has tournaments, contests, and Steam-like sales. If this isn’t actual momentum, at least it’s a good approximation of momentum. It appears just as attractive to some gamers as Steam or a new PC, and the multi-platform availability (TV, PC, Mac, netbook, and eventually phone) has a target market out there in the wild.

So what is this momentum building towards? Well, making OnLive the fourth console. 360, PS3, Wii, and OnLive. What I’m saying is that OnLive intends to leave traditional PC gaming in the dust, to relegate it to obscurity. And here’s how they might be able to do it.

If you dig a bit into the structure of the OnLive family of companies, you see that they are a subsidiary of a company called Rearden Companies. Rearden was founded, and is currently owned by, one Steve Perlman. He’s not some gray-haired Silicon Valley investor; he’s an old-school computer engineer of the first order. His CV goes all the way back to Atari: he created WebTV (whose design team ultimately birthed the Xbox 360), and he had a hand in developing QuickTime for the Macintosh as well as modems for Sega and Nintendo. He has all the knowledge and experience to have real, hands-on input into most aspects of the technology that makes OnLive work. This makes him fairly unique among platform owners.

Steve Perlman and model in front of Mova Contour reality capture system
Another Rearden subsidiary is Mova. This is the same company that did CGI work for The Curious Case of Benjamin Button and, to quote a recent GamesIndustry.biz article, “the latest entries in the Pirates of the Caribbean, Harry Potter and Transformers series.” Mova seems fairly secretive, but if you hit their site you can learn what you need to know about their motion capture technology.

You’re all probably familiar with motion capture. An actor, usually complaining of both heat and embarrassment, dons a black leotard studded with golf-ball-sized dots, or has a series of dots plotted directly on their face for close-ups. They then proceed to pantomime a scene in front of at least three cameras. The computers triangulate the dots, and a virtual 3D skeleton appears on the computer screen. Animators then “skin” this wire frame with customized art. Mova's CONTOUR does it differently. From their whitepaper, available on their Web site:

"Rather than capturing a sparse set of dots in a scene, CONTOUR captures entire 3D surfaces. For example, with markers it might be feasible to capture a 3D 'constellation' of up to 200 points on a human face. With CONTOUR we capture over 100,000 3D points on a face with 0.1mm precision, and while we are doing it, we also capture the visual image of the face as it is lit in the scene. With so many points captured, the face no longer looks like a constellation of dots; it looks like a photoreal face, just like one you’d see captured by a POV [point of view] motion picture camera. And there is good reason for this: We are capturing surfaces in 3D volumetrically (i.e. in the round), with similar resolution as a conventional motion picture camera records scenes in 2D from a single POV. Effectively, this makes it possible to shoot in 3D without compromising the realism that we expect when we shoot in 2D.”

From clues scattered throughout this paper it seems to have been written in 2006. I can only imagine that their technology has improved somewhat since then.
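The marker-based pipeline described above comes down to triangulation: each camera sees a marker as a 2D dot, and intersecting the cameras' viewing rays recovers the dot's 3D position. Here is a toy sketch of that core idea (my own illustration, not Mova's code; the camera positions and ray directions are invented):

```python
def triangulate(c1, d1, c2, d2):
    """Approximate the 3D point seen by two cameras.

    Each camera contributes a ray p = c + t*d (c = camera center,
    d = direction toward the observed dot). Real rays rarely intersect
    exactly, so we return the midpoint of their closest approach.
    """
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    w0 = tuple(x - y for x, y in zip(c1, c2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero only if the rays are parallel
    s = (b * e - c * d) / denom  # parameter along ray 1
    t = (a * e - b * d) / denom  # parameter along ray 2
    p1 = tuple(ci + s * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + t * di for ci, di in zip(c2, d2))
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

# Two cameras whose rays both point at a marker at (1, 2, 3):
print(triangulate((0, 0, 0), (1, 2, 3), (10, 0, 0), (-9, 2, 3)))
# -> (1.0, 2.0, 3.0)
```

With a couple of hundred markers this is run per dot per frame; CONTOUR's trick, as the whitepaper explains, is doing the equivalent for over 100,000 points on a dense surface rather than a sparse constellation.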

Reading into recent press releases, interviews with industry publications, and yes even their GDC 2009 unveiling press conference, you can begin to see that Mova’s plan is to remove some or all of the need for traditional digital-art workflows. Imagine an entire character design department that can be replaced by a Mova device. The personnel costs, which make up an inordinately large part of AAA game design teams, are cut dramatically. Scaled up and used to its imagined potential, what you have in a Mova device is the digitizer from Tron. Push a button, and you are in the game. Push a button and a chair, table, sword, machine gun are in the game. Modeling items, as well as people, becomes a trivial matter.

Imagine a game where modeling and animation are trivial, where photorealistic faces, deformable surfaces like cloth and skin, are taken for granted. This would free game designers to focus their attention on all the other elements of game design, like physics, artificial intelligence, or the overall size and scale of the game world itself. Imagine what a company like Bethesda could do for Elder Scrolls 9 if all they had to do to create characters and items was scan in some actors and their clothing using a Mova device.

Now, imagine the kind of computer you would need to run a game like this. If this future comes to pass, then it’s possible that the only way to do it affordably would be to run that game remotely inside the cloud, where multiple high-end devices would simultaneously pound out the polygons you’d need to be able to see and experience the game. And, using the OnLive technology, the only piece of information you would receive would be the audio and video feed.

It’s about now that you’re rolling your eyes at me. That’s OK, because I have every expectation that this will never happen. It would be a giant shift from the way that games are made and could, potentially, merely lead to the “full motion video” debacle of the nineties, where every game devolved into a grainy video stream with choose-your-own-adventure gameplay. I don’t want to see that happen again. But understand that even if OnLive fails, or is bought and reduced to an add-on for AT&T cable subscribers or hotel chains, Mova is a separate entity. It seems to be playing a much longer game than even OnLive is.

So, it’s time to stop debating if OnLive will be “the one console” or the one console in every Motel 6. The real company I’m interested in is Mova. They’re the only people I can think of that might prevent me from buying my next high-end gaming PC.

SOURCE: Gamers with Jobs.