
4.2. Design Considerations—The Technical Perspective

As we learn to replace trees of prebuilt explicitly listed courses of action with procedural rules-based systems, we can imagine ceding far more control to players. Once procedural tools can model a town council (or whatever), which reacts dynamically to events in the world, then designers don’t have to second guess every move a player might make, and instead can write high-level reactions, goals, and behaviors that player actions will drive. But designers have to let go of detailed control of every action, and figure out how to get the most out of the procedural tools computers are good at.

—Doug Church

For someone like a painter or architect, technique has more to do with efficiency and effectiveness than it does with durability. Questions swarm the constraints of the design. Is it networked or does it run locally? How much processor speed does it use? Who are the readers and what do they know? To complicate matters, the process of the development and the conceptual design intersect and often conflict.

This is the thorny space of System Design.

4.2.1. Designing the System

In Austin, Texas, the designers, writers, and developers of IonStorm work toward the second release of their BAFTA (British Academy of Film and Television Arts) award-winning game Deus Ex 2. At IonStorm, these designers use a process that is relatively straightforward and familiar to the schools of game design, software design, and cinematic writing. They start large. They begin the project with early meetings. These meetings include the individuals who have the most experience, passion, and investment in the game and start with the widest perspective: the global view.

Then they narrow that perspective. When they start a new project, they hold a preproduction meeting and discuss the story, the setting, the characters, and what tools, or capabilities, the reader will have in the game. This is a useful approach because they’re not putting the story ahead of the interaction design or vice versa. They look over the technical implementation. They try to get everyone involved. They try to get both sides of the collective brain—the creative and the technical—talking to each other.

The project begins with a director, a producer, and “discipline leads”—the folks who keep an eye on a practice area such as graphic design or networking—and as the project continues, they snowball more people in as needed. This ensures that the design is straightforward in the beginning, the concentration is focused, and the budget is small. While doing this early conceptualization, they don’t need to be building diagrams of use-case or narrative flow. As with any writing, the process is a distillation. It’s a refinement of the details of the total concept and the imaginative exercises—the visual process—of understanding those details. What is different about the way that they write for interactive narrative is that in these early meetings, they are developing a world view, so the refinement of the details isn’t linear because the way the story is told isn’t linear. Checklists and databases need to be kept, but this maintenance isn’t an issue of writing a narrative art; it’s an issue of producing a world-view perspective.

It’s night at the dock and a boat sits in the water. It’s a relatively industrial environment, and the authors and designers of the story want readers to have room to explore, but finally, the authors want the readers, for the sake of the story and of the gameplay, to get in the boat and drive away. The authors begin this development by looking at what might be done on a real dock, and how to attract attention to the boat for most everyone who passes through this node of interaction.

The end result is a narrative line that allows some looking around and exploration, then a relatively fixed and linear passage to a new node of interaction.[*] Readers need guidance at occasional points to keep the narrative line intact, and, as we’ve seen, these moments of constraint help guide any interaction. Stoplights do the same job. Interaction requires constraint just as narrative requires imagination. If there were no bottleneck, it would become total action and the narrative would run the risk of disappearing.

[*] This is what, in 1.5.3, we’ve previously termed a Nodal story structure and something Smith calls “string of pearls” story structure.
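This nodal, “string of pearls” structure can be sketched in a few lines of code. The sketch below is purely illustrative (the class and function names are mine, not IonStorm’s): each “pearl” is an open node of interaction that the player explores freely until an exit condition, such as boarding the boat, advances the story to the next pearl.

```python
# A minimal sketch of the "string of pearls" (nodal) story structure:
# open exploration nodes connected by fixed, linear transitions.
# All names here are illustrative, not from any actual engine.

class Pearl:
    """One node of interaction: free exploration until an exit condition."""
    def __init__(self, name, exit_condition):
        self.name = name
        self.exit_condition = exit_condition  # callable: world_state -> bool

def run_story(pearls, world_state):
    """Walk the pearls in order; within each, the player roams freely."""
    visited = []
    for pearl in pearls:
        visited.append(pearl.name)
        # In a real game this loop would run the simulation; here we
        # just assume the exit condition eventually becomes true.
        while not pearl.exit_condition(world_state):
            world_state["actions"] += 1  # player explores the node
    return visited

# The dock scene: explore freely, but the story only advances
# once the player boards the boat (stood in for by an action count).
state = {"actions": 0}
dock = Pearl("dock", lambda s: s["actions"] >= 3)
city = Pearl("city", lambda s: True)
print(run_story([dock, city], state))  # → ['dock', 'city']
```

The exit conditions are the “bottlenecks” discussed above: within a pearl the reader is unconstrained, but the transition between pearls is fixed, which is what keeps the narrative line intact.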

A goal for IonStorm is to see that the pacing and timing of the game are driven by the player, not the author. As Raph Koster terms it, this is an “expressive” approach. It’s more interesting for the player if this is the case. While the author or designer might feel otherwise, Smith notes that “designers are smart to turn over the creation to the player.”

But finding the right balance is important. Like any software developers, they check their work as they go. For IonStorm, there are two real tests of their success.

The first test is the user testing. In one instance, a woman was starting the game and rather than take the anticipated route of getting off the dock onto the boat and driving the boat to the nearby city, she instead stayed on the dock and began to experiment with objects there. She threw a barrel into the water. She stood on the barrel. She got back on the dock. She shot the barrel. The barrel exploded.

She jumped in the water and swam under the dock. She got back onto the dock. She followed a rat. At first, this caused some concern among the interaction designers who were watching over her shoulder because they assumed that the cues they had given her to follow (such as the nearby, idling boat at the end of the pier) were missed and, subsequently, her experience was boring. On the contrary, she loved the exploration. Her time in this node of interaction simply allowed her the chance to build a different experience for herself.

The second test is the story testing. IonStorm asks a second person to watch the game as it’s being played by the first person. This observer, removed from the interaction, if still engaged in the experience, is a sensitive weathervane to the movements of the narrative. If the observer’s attention is still on the game without having the experience of control and exploration, IonStorm knows they’ve got a solid narrative structure.

A Good Design

A good design is one that solves more than one problem with a single solution.

In contemporary narrative, when so many forms of design have been woven together, a narrative designer has to weave at multiple looms. The best interactive narrative designers need at least cursory familiarity with interface design, graphic design, interaction design, and information design. Let’s not forget story structure, graphic composition, animation, camera cuts, lighting effects, and the knowledge of the thread of technology used to weave these together. Most designers who are involved in interactive narrative have backgrounds that cover at least three or four of these disciplines. It seems to be a characteristic of digital designers; they cook in multiple kitchens.

But the process of design can be sticky, especially when stirring in so many different ingredients. Fortunately, interactive narrative can be developed by following a recipe that, like any discipline, can be spelled out and, if followed, will at least frame the appropriate questions, if not answer them.

As Team ChMan organizes for another of their monthly episodes of Banja, they have to consider more than a single form of design. Starting from the bottom, they have a team of programmers who have developed a proprietary authoring and editing platform named “Epi_Editor.” This is software that allows a meta-form of editing, integrating characters, camera cuts, background scenes, dialogue, and even the iconographic interface that runs along the bottom of the screen as nonplayer characters speak with the reader. These pieces of the design, each one a part of another form of design (illustration, dialogue, color, sound, etc.), are all integrated, just as in the design of a classic form of narrative, but, in many cases, with far more complexity.

Define the Requirements

The larger a project is, the more requirements it has. Any time many millions of dollars are at stake, everyone who is working on the project is excited. And then they receive a list of requirements, and morale takes a sharp nosedive. But the requirements—at least for the engineers and designers—serve as a design constraint, forcing some decisions and informing others. Everyone needs to know what the project requirements are at the early stages, so a tradition has developed in most areas of software development: the “Software Requirements Document,” or “SRD,” as it’s called at Oracle, Microsoft, and AOL, large companies that have long histories of staunch requirements.

Generally, a well-built SRD will contain items that are useful to interactive narrative design. The effort of an SRD is largely administrative, but it’s a worthwhile exercise for a serious project. A table of contents might include, for example, the items in the table that follows.

The Software Requirements Document will be different for different projects, but the basic premise is the same: Define the requirements. This allows the group of designers and engineers working on the project to know what the goal is. If “quality” can be defined as “adherence to the requirements,” then you need to get those requirements on paper so everyone knows what’s “good enough.”

This is important because any design project is never, really, finished; it’s abandoned.

Build the Production Metrics

Any project that includes a large number of people requires measurement of some form, if only for the determination of financial success and of whether or not the assets (images, text, audio, video, or other content being integrated) and materials for the project have all been completed. Production metrics include several flavors of accounting, but they generally measure what assets are due, when they are due, and how many there are. Generally, this is the role of a project manager.

Document Ownership (author, editor, etc.)

Table of Contents

- User and Target Segments
- Development Approach
    - Technology Development Method (do we build, buy, or rent)
    - Market Analysis (who’s the customer and why)
    - Competitor Analysis (who’s the competitor and why)
    - Software Features (what the software does)
    - Business Objectives (how money is made doing this)
- Design Approach
    - Use-Case and Workflow (what information is used)
    - User Interface and Interaction Design (how people use it)
    - Visual Design (what the information looks like)
- Examples and Scenarios (specific citations)

Production metrics tell the team what is due, when, and how many items are going to be made. The more accurately this can be identified at the outset of the project, the more smoothly the project will run. You need to determine what the production metrics are for a narrative production just as you would for a website design, television production, or mobile technology system. The same principles apply.
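As a concrete illustration of this kind of accounting, here is a minimal sketch of an asset tracker. The asset names, fields, and dates are invented for the example and don’t come from any particular production.

```python
# An illustrative sketch (not any studio's actual tool) of the asset
# accounting described above: what is due, when, and how many.
from dataclasses import dataclass
from datetime import date

@dataclass
class Asset:
    name: str
    kind: str          # "image", "audio", "dialogue", ...
    due: date
    done: bool = False

def overdue(assets, today):
    """Assets past their due date and still unfinished."""
    return [a.name for a in assets if not a.done and a.due < today]

def completion(assets):
    """Fraction of assets finished: a basic production metric."""
    return sum(a.done for a in assets) / len(assets)

assets = [
    Asset("dock_background", "image", date(2024, 3, 1), done=True),
    Asset("boat_engine_loop", "audio", date(2024, 3, 8)),
    Asset("council_dialogue", "dialogue", date(2024, 3, 15)),
]
print(overdue(assets, date(2024, 3, 10)))  # → ['boat_engine_loop']
print(round(completion(assets), 2))        # → 0.33
```

A project manager’s spreadsheet does the same job; the point is only that the counts, dates, and completion state be explicit from the outset.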

Determining the Reader

In order to ensure some level of success when the project arrives in the market it’s worth knowing a few things before you begin: Who are the readers of the specific project and what are their experiences, abilities, needs, and concerns?

Who is the Reader?

Determining the reader is the first step. What are their likes, dislikes, attitudes, budgets? How do they spend their time? What magazines do they read? Where do they go on the weekend? What music do they listen to? What television shows do they watch? How old are they? Where do they live? What, in essence, do they like?

It wouldn’t make sense to design a predictable and business-oriented narrative for a slash-and-stash gaming crowd. Likewise, at least a cursory knowledge of the level of technical expertise is needed before launching a design campaign for a machine that only 5 percent of the potential population may use (be it the top five percent with gigabytes of RAM or the bottom five percent with hand-held calculators).

How Many Readers Are There?

The more you know your reader, the better your job as a writer will be.

Determining single or multiple readers—and if multiple, how many—is the next step. First, there should generally be some initial consideration of whether the project is intended to be primarily networked or primarily standalone. Deus Ex was developed as a game intended to be played as a solitary experience, and then it was taken online because there were a wide number of capabilities that would map well to multiplayer use. Or so they thought. Once they got the game online, the developers anticipated a large number of visitors and were disappointed when their party never really got swinging. EverQuest, on the other hand, is designed specifically as a large-scale multiplayer game. If you try to play it alone, it quickly becomes evident that the party worth visiting is the online environment. So the form of interaction among people over a network is an important element in the design.

The quantity and frequency of the networked traffic should be known. RespondTV—an interactive television company based in San Francisco—developed backend servers for millions of simultaneous visitors. Knowing that they had to develop software that accommodated four or five million simultaneous users obviously guided their approach to development.

There are two factors being discussed in the previous paragraphs: the number of people who are using the physical hardware and the number of people who are using the networked machine. Alan Wexelblat, a long-time VR researcher, developer, and theorist, listed, in 1993, seven items to be considered when designing networked environments. The relevance of the list persists:

  • How much data will be used at the same time?

  • Who will have control over the output and the input, and for how long?

  • How will users communicate with one another?

  • How will users know what other users are doing when?

  • What parts of the system will users see at any one time?

  • How will users know what other users are seeing?

  • How does what others see affect them?

The first question is probably the most significant. The term “latency” is another name for this problem. Here’s the problem: Two users are in the same room and one leaves the room at 12:00. This input is sent to the server and takes one minute to get there, hitting the server at 12:01. The other users in the room will see the door close at 12:02, since it takes one more minute for the output to get to them. Meanwhile, another person in the room decides to leave at 12:01. What happens? Do they have to open the door in front of them? Or is the door already open? How can they tell?
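The door scenario can be made concrete with a toy timing model. Assuming, as in the example above, a one-minute delay from client to server and another minute from server to the other clients, the sketch below shows how the second user ends up acting on stale state (the delay values and function names are illustrative):

```python
# A toy illustration of the latency problem described above.
# All times are minutes past 12:00; delays are assumptions for the example.

SEND_DELAY = 1       # client -> server
BROADCAST_DELAY = 1  # server -> other clients

def seen_by_others(event_time):
    """When an event one client performs becomes visible to everyone else."""
    return event_time + SEND_DELAY + BROADCAST_DELAY

# User A leaves the room (opening the door) at 12:00.
a_opens = 0
print(seen_by_others(a_opens))  # → 2 (others see it at 12:02)

# User B decides to leave at 12:01, before A's action is visible to B.
b_leaves = 1
door_open_for_b = seen_by_others(a_opens) <= b_leaves
print(door_open_for_b)  # → False: B still sees a closed door
```

B’s client and A’s client now disagree about the state of the same door, which is exactly why networked designs need explicit rules for reconciling late-arriving events.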

Design Constraints and Balancing Trade-Offs

Multiple readers interacting with shifting conditions gets complicated.

Design constraints guide decisions. One thing to note is that there are two sorts of constraints that we’ve looked at in this book. The first was the interaction constraint on the reader; the second is the design constraint on the author. The first is intended to frame the interaction capabilities of the reader, and the second is there as a means of development for the author. It’s a kind of buttress. In architecture, one design constraint is the footprint of the building. In movie production it’s the dimensions of the screen.

Design constraints serve as a means of starting the project and actually inform the design. When Michelangelo said, “My lines follow the lines that led them there,” what he meant was that as soon as he puts down one line it constrains and informs the ones that follow it.

There are three primary design constraints we’ll consider here. There are possibly thousands of design constraints worth considering, but these three in particular will remain issues, regardless of technical improvements in the coming decades:

  1. Responsiveness vs. Resolution

  2. Optimization vs. Ubiquity

  3. Customization vs. Design

The first two are technical, and the third is artistic.

1. Responsiveness vs. Resolution

I was in a video game parlor with a 14-year-old and her mother. The video games were in the foyer of a multiplex movie theater. We were taking a quick survey of the available game options. For the 14-year-old, it was no question: the game to be played was the game with the best graphics. After the three of us left the movie (it involved a large reptile wrecking things), I stood outside and listened to some of the departing comments. “Neat CG, but the plot was a mess.” “Loved the image of...” And then, as the crowd was thinning, I heard “Sheesh, that was stupid.” And the response: “Yeah, but it looked cool.”

It can’t be left unsaid that movies, television shows, magazines, websites, and video games have all sold marvelously and won small mountains of awards because of their looks alone. But beauty isn’t screen deep. The image is critical, but the interactive responsiveness is also important. Sometimes you will have to choose one over the other because processing speeds are insufficient, network lag times are too long, and so on.

Frederick P. Brooks, Jr., an early pioneer of virtual reality, sees real-time motion and high-quality imagery as a trade-off that will always exist in specific forms of interaction design. His vote is to prioritize them so that objects always move realistically at the expense of everything else. He also mentions that jumps in objects need to be avoided at all costs. This, of course, applies anywhere there is a hit on the rendering calculations of the display device, be it a cell phone, a television, a desktop computer, or Dick Tracy’s two-way wrist radio.
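Brooks’s priority (smooth motion first, image detail second) is the idea behind what game engines now call dynamic quality or resolution scaling. The sketch below is a minimal, hypothetical version of such a controller; the frame budget, scaling factors, and function names are assumptions for illustration, not taken from any published engine.

```python
# A sketch of Brooks's priority in code: hold the frame budget (smooth,
# responsive motion) and let image detail absorb the cost.
# The numbers and scaling rule are illustrative assumptions.

FRAME_BUDGET_MS = 16.7  # roughly 60 frames per second

def adjust_detail(detail, last_frame_ms):
    """Drop detail aggressively when a frame runs long; recover slowly."""
    if last_frame_ms > FRAME_BUDGET_MS:
        detail *= 0.8   # motion first: cut resolution to stay responsive
    else:
        detail = min(1.0, detail * 1.02)  # creep back up with headroom
    return detail

detail = 1.0
for frame_ms in [15.0, 22.0, 25.0, 16.0, 14.0]:  # simulated frame times
    detail = adjust_detail(detail, frame_ms)
print(round(detail, 3))  # → 0.666
```

The asymmetry (cut fast, recover slowly) is the point: a momentary drop in resolution is far less jarring than a jump or stutter in motion, which Brooks says must be avoided at all costs.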

Emphasizing movement makes sense when you consider that the human race has been hunting live animals for many millennia and our binocular eyes have evolved to isolate first movement, then contrast, then color. This can well be reflected in graphic design with the priority placed on movement—the characteristic to which we’re most sensitive. But responsiveness isn’t simply the way things appear to move. When we move something, we expect to see a change, and if that change doesn’t happen when we expect it, we feel like we’re involved with something that is broken, vacuumed of life, and stripped of change.

To determine where to put responsiveness and where to put resolution, ask yourself the following:

Q: Where will people need to do the most interaction (is it with objects or the plot of the story)?
Q: What is background and what is foreground (with objects and the plot of the story)?
Q: Where is the focus of the reader’s attention?
Q: What movement is absolutely necessary?

2. Optimization vs. Ubiquity

The web has been successful because so many people can both author and read. This is also the reason why it’s such a bother to use. Not all authors will work with the same goals in mind and, despite ISO standardizations, not all readers will view content through the same lens.

It’s a simple idea, this notion of ubiquity, lowest common denominators, and simple subtractions, but it’s just not the way the real world works. Author once, view everywhere is not a reality for serious development, so a choice has to be made. It’s generally a ratio: Ubiquity is inversely proportional to optimization. The higher the level of specification (speed, color, timing, behaviors, network latency, etc.), the smaller the audience.

It ends up being an issue of quality versus quantity. There is a rather brutal approach to solving this sticky problem (it’s a method that works well in many cases), and that’s the financial answer. If the money is coming from a large, undedicated crowd, then ubiquity is the best solution. If a high level of loyalty, interest, and quality is needed, then optimization is the best solution. This might also be something to consider from a perspective of personal pride and prejudice; some artists don’t care about accessibility any more than they care about money. Here are some questions to help:

Q: What functionality can be rewritten for different technologies, and at what cost?
Q: How many people need to see this work as it was intended so that the production costs of the project are covered?
Q: Are there small features that could be modified or excised altogether to make the project available to a larger readership?

3. Customization vs. Design

As we’ve already seen, reader participation is critical. Providing readers with the ability to change the story causes an increased interest. Likewise, powerful visual design is critical. But these two are often in opposition to each other. Allowing readers to change the design will generally wreck it. If you don’t believe me, hand your favorite photograph to the first stranger you meet, hand them a ballpoint pen, and ask them to make the photo better.

Despite democratic ideals, people are trained to be designers, musicians, chefs, and architects. And some are more talented, skilled, and educated than others. Our society of specialized labor doesn’t let architects in the kitchen any more than we allow chefs to mix concrete. And, despite aristocratic ideals, interactive systems are structured such that the more people use them, the better they get (such as the web, which has been named fundamentally democratic).

So the initial phases of design need to weigh these conflicting issues. It amounts to determining what changes readers are allowed to make so that they feel invested, interested, and have the chance to make change. Or whether you put your designers in charge, knowing that the beauty and function of an environment outweighs the first-person participation of an individual reader.

One way to solve this problem is to consider where the opinion of the narration lies and decide what effect allowing a user to interact with that opinion will have. Let’s consider this as asking whether the element to be altered by the reader is a component of form or of function. In the case of interface design, form includes the color, size, and location of a button. In the case of character design, it includes the color, accessories, and body shape of the character. These are all components of form rather than function, so it’s a safe bet that in a narrative where readers can change the specifics of plot, these are acceptable plugs for interaction.
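One way to enforce this form/function split in code is to whitelist the form-level fields a reader may change. The sketch below is illustrative only; the field names and the particular split are assumptions drawn from the discussion above, not from any actual game.

```python
# A sketch of the form/function split: readers may change "form"
# properties (color, body shape, accessories) but not "function"
# properties that carry the plot. Field names are illustrative.

FORM_FIELDS = {"color", "body_shape", "accessories"}

def customize(character, changes):
    """Apply only form-level changes; collect and refuse edits to function."""
    rejected = {}
    for field, value in changes.items():
        if field in FORM_FIELDS:
            character[field] = value
        else:
            rejected[field] = value
    return character, rejected

hero = {"color": "blue", "body_shape": "tall", "role": "informant"}
hero, rejected = customize(hero, {"color": "red", "role": "assassin"})
print(hero["color"])  # → red
print(rejected)       # → {'role': 'assassin'}
```

The reader gets the investment of making visible changes, while the narrative opinion (here, the character’s role in the plot) stays under the designer’s control.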

As Harvey Smith of IonStorm puts it, “Designers are smart to turn over the creation to the player.” The answer also rests in whether the interaction is inside or outside the skull. If it’s inside, then the users should be given a large share of customization, and bollocks to the design. If otherwise, trim down the customization features and beef up the visual design.

Q: What parts of the design frame the metaphor?
Q: What parts frame the interaction?
Q: What part of the narrative opinion cannot be changed?
Q: What are the worst changes a reader should be allowed to make?
Q: What changes that they make will increase their interest most?

The Importance of Metaphor

The quality of a good metaphor is determined by its predictability and its internal consistency. Because it’s a relationship of symbols, the information that’s presented needs to have an internal relationship [1.3.3].

The metaphor is what allows the reader to understand the rules of a world. If, for example, these rules govern a virtual reality (or simulated) environment, they allow the reader to understand their capabilities. Some interaction designers term them “preconditions,” but, semantics aside, these rules allow the reader to anticipate change, and they provide a redundant level of information that highlights differences.

The desktop metaphor is predictable because most of us know how a desk functions. Most of us understand pieces of paper and folders, trashcans, and the basic process of copying and pasting. But, as we saw from Ted Nelson’s criticisms, the internal consistency isn’t what it might be. It’s worth citing a second time:

“We are told that this is a ‘metaphor’ for a ‘desktop.’ But I have never personally seen a desktop where pointing at a lower piece of paper makes it jump to the top, or where placing a sheet of paper on top of a file folder caused the folder to gobble it up; I do not believe such desks exist; and I do not think I would want one if it did.”

The desktop is a metaphor for interaction. Others that we see today include radio dials for Internet audio listening devices, calculator push-buttons for numeric calculators, paint and canvas for drawing programs, and pages for text (it’s curious to notice the lack of buildings and other spatial metaphors when we see them around us so much).

In summary, the metaphors that are chosen for interactivity are difficult because the interactivity of the computer screen rarely maps to the interaction of the act it’s emulating. The whole reason there is a paint program is so that it can do things that paint and canvas cannot. The metaphor eventually breaks down when taken to its extremes. Additionally, metaphor is based on a process of determining what the user does and how he or she understands it. It is a form of compression of information.
