Gamers will literally be able to dive into the realistic world seen in large screen movies

I have been looking at the rhetoric surrounding console launches. This is, for real, from Sony’s 2005 press release announcing the PlayStation 3:

Gamers will literally be able to dive into the realistic world seen in large screen movies and experience the excitement in real-time.

The same old drab way of selling a new console, of course, promising better graphics (joy!). But spot the really wonderful mistake of using “literally” as a mere intensifier: the selling point is not that games will feel more like movies, but that diving in is not a metaphor at all! You will literally be jumping headlong into your television set. Wow! Can’t wait!

PS. On the other hand, since we all so massively seem to agree that realistic 3D is not the way to go, I am beginning to hope that someone will actually stand up for 3D photo-realistic graphics. Any takers?

6 thoughts on “Gamers will literally be able to dive into the realistic world seen in large screen movies”

  1. I’ll stand up for photorealism when we get there, and even then – only as a glob of paint on a huge palette.
    Trouble is, we’re so far away. Any and all attempts at photorealism fall straight into the uncanny valley, and will continue to do so for the foreseeable future.

  2. I agree largely with lemon, but I feel the Uncanny Valley is more of a concept than a real problem that artists or anybody else in the industry are facing.

    Photorealism will happen a year and a half after the first dedicated commercial realtime raytracing graphics engine is released in the form of a playable game.

    Microsoft won’t do realtime raytracing with the XBox 720 because it would be riskier and more expensive than taking advantage of the progress they have made with DirectX 10. However, the Cell processor is ideal for the task. I have seen three PS3s outputting some awesome visuals, but then the question becomes: how forward-thinking is Sony with the PS4?

  3. *BOINK* They lied!

    I’m guessing by “dive into” they mean “move the SIXAXIS controller along the Z-axis”. That’s all well and good, but Lair showed us (once more) that diving in is nice, while being able to swim is more appreciated. Mass Effect did the same thing, but in the actions to be accomplished and the general gameplay instead of the control scheme.

    That reminds me of the “ad” on the back cover of Metroid Prime 3: Corruption. It says something along the lines of “Thanks to the Wii Remote’s unprecedented control, for the first time, truly become Samus Aran”. It turned out that I genuinely felt I was “playing as her”, but not because of the controls; rather, thanks to the much more extensive depiction of the universe (Federation marines and frigates, piloting Samus’ airship, meeting other people, etc.).

  4. Brook, in my understanding, raytracing only makes for realistic light on some surfaces of some well-defined objects? You probably still have the problem of having to model and animate a human figure.

  5. Jesper, you are right. However, I believe that when realtime raytracing becomes cheaper than rasterized engines, developers will be able to express their vision in a mathematically described world rather than in one ruled by triangles (see the sketch after this comment). I believe they will then put in the extra effort to create a whole world in which every object is just as detailed as any other, drawing on huge libraries of objects and shapes. By that time, fully detailed scans of real people will be convertible, with very little adaptation, into 3D models that are indistinguishable from the real thing.

    The reason I believe this so strongly is that realtime raytracing’s performance is very consistent and doesn’t degrade anywhere near as quickly as rasterized engines do when put under load. RealStorm ( http://www.realstorm.com ) has some excellent demos of this. And once the industry turns its attention to raytracing as a commercial long-term technology investment, we will see some huge leaps in lighting and material effects.

    The hardware vendors have already positioned themselves, but the market isn’t ready yet.
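
    To make the “mathematically described world” point concrete, here is a minimal sketch (in Python, with the scene, resolution, and ASCII output made up purely for illustration) of rendering an implicit surface, a sphere defined by an equation, by intersecting camera rays with it, with no triangle mesh involved:

      import math

      def intersect_sphere(origin, direction, center, radius):
          """Distance along the ray to the nearest hit, or None if the ray misses."""
          # Solve |origin + t*direction - center|^2 = radius^2 for t,
          # assuming direction is normalized (so the quadratic's leading coefficient is 1).
          oc = tuple(o - c for o, c in zip(origin, center))
          b = 2.0 * sum(d * o for d, o in zip(direction, oc))
          c = sum(o * o for o in oc) - radius * radius
          disc = b * b - 4.0 * c
          if disc < 0:
              return None
          t = (-b - math.sqrt(disc)) / 2.0
          return t if t > 0 else None

      # Shoot one ray per character cell toward a single sphere; '#' marks a hit.
      WIDTH, HEIGHT = 24, 12
      for y in range(HEIGHT):
          row = ""
          for x in range(WIDTH):
              # Camera at the origin, looking down +z at an image plane at z = 1.
              d = (x / WIDTH - 0.5, y / HEIGHT - 0.5, 1.0)
              norm = math.sqrt(sum(c * c for c in d))
              d = tuple(c / norm for c in d)
              hit = intersect_sphere((0.0, 0.0, 0.0), d, (0.0, 0.0, 3.0), 1.0)
              row += "#" if hit is not None else "."
          print(row)

    The only point of the sketch is that the “scene” here is just the sphere’s equation; adding detail means adding more equations and materials rather than more triangles, which is the trade-off being described above.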

  6. Sony’s a big electronics company doing what big electronics companies do — confusing looking good with being good. Virtually every piece of software I’ve bought for my PS3 has opened with a protracted moneyshot designed to showcase graphics. Fantastic graphics can add a lot to the gameplay experience, but a game has to combine all the other mechanics into the recipe to actually be playable. I’ve played a lot of great-looking games, and I’ve been most impressed by the new hardware and software’s ability to render a basic sense of depth and dimensional parallax, along with great lighting. But many of them have been absolutely terrible to play — either the sound design was awful or the control scheme actually worked against the player (MoH: Airborne is the latter case in point).

    And let’s not forget that the Uncanny Valley is only one issue with the problem of visual realism. Robotic-looking faces are one thing, but photorealism clashing with cartoonish violence is something else. I wonder how far mainstream developers will be willing to push the content envelope. How closely will on-screen gore match on-screen violence? Or is this a Pandora’s box that game developers (maybe wisely?) do not want to open? When does photorealism clash with the inherent fantasy-laden experience of any virtual or simulated world? All it’s going to take is one terrible school shooting this fall by a kid who just bought and played Grand Theft Auto IV for violence in videogames to become an election topic once again — and despite the efforts of Jesper, Henry Jenkins, and a whole long list of others, games are still culturally defined as a toy. Photorealism might be a sword with a hidden double edge.
