Pacific Connection (English)

Internet2: Re-inventing the Internet

LeRoy Heimrich, a professor emeritus at Stanford University's School of Medicine, stands in front of a computer displaying the stark, stereoscopic image of a human hand and arm, detached from a torso, placed vertically on the screen. He presses a key and the image rotates 360 degrees, the open palm eerily giving way to knuckles and fingernails, then coming back into view. He presses another key, and the skin looks as if it were scraped away, revealing muscle. Then the muscle layer is gone, revealing bones and tendons. Heimrich zooms "in" toward the bone marrow, then back out to the skin.

The image looks so realistic, you feel as if you could shake hands with it. That's no surprise, for this is no computer-generated image, but a sequence of digital photos-of a cadaver. What is surprising is the fluidity of the rotation. Heimrich and I are in an office building housing Stanford University's SUMMIT laboratory, which develops teaching tools for future doctors. But the 500 1.5MB images comprising the hand are resident on a server at the University of Wisconsin, 1,700 miles east. How can images that size be displayed over a network so quickly? All you need is a very fast wide-area network, a "super-network," and Stanford, like some 200 other U.S. universities, is linked to one.
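The arithmetic makes the network requirement clear. Here is a rough, back-of-the-envelope sketch in Python: the frame rate is simply whatever the link can deliver, and the link speeds and the assumption of uncompressed, uncached frames are illustrative rather than SUMMIT's actual setup.

```python
# Rough estimate: how quickly can a 500-frame, 1.5 MB-per-frame rotation be
# streamed over links of various speeds? Values are illustrative assumptions.
FRAME_BITS = 1.5 * 8 * 1e6          # one frame: 1.5 MB, about 12 megabits
FRAMES = 500                        # a full 360-degree rotation

for name, mbps in [("dial-up", 0.056), ("DSL/cable", 1.5),
                   ("fast campus LAN", 100), ("Abilene-class backbone", 2400)]:
    frames_per_sec = mbps * 1e6 / FRAME_BITS
    rotation_sec = FRAMES / frames_per_sec
    print(f"{name:>22}: {frames_per_sec:8.3f} frames/s, "
          f"full rotation in {rotation_sec:9.1f} s")
```

Only at backbone speeds does the rotation finish in a few seconds; at DSL rates, moving the same frames would take more than an hour.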

"When the network is running at peak, which it mostly is, the hand rotates in near real time," says Pavati Dev, SUMMIT's principal investigator and associate dean at the school. "But if you drop down to a conventional connection, the movement becomes very jumpy. That's happened to us a few times when thunderstorms have taken out the lines. When you loose the high-speed backbone, the application is really not useable."

The virtual hand is an example of the work being done by affiliates of Internet2, a Washington, D.C.-based consortium of universities and companies. Internet2 is probably the world's largest high-performance research and education networking organization, with more than 3 million people having access through their institutions. While more and more Internet users have discovered the advantages of cable and DSL connections over dialup, some academic and government researchers are experimenting with networks so fast that they leave conventional "broadband" connections in the dust. In that sense, Internet2 resembles the original Internet-which itself began as a government project and expanded to universities before migrating to end users.

Most work by Internet2 members is done over the Abilene network, which was developed by the University Corporation for Advanced Internet Development (UCAID) in partnership with Cisco Systems, Juniper Networks, Nortel Networks, and Qwest Communications-which donated the capacity. Indiana University provides administration. But Abilene and Internet2 are only part of the picture for high-speed networks. Comparable projects in Asia include the Japan Advanced Internet Research Consortium (JAIRC) and the Asia-Pacific Advanced Network (APAN). Other super-networks are in Europe and South America. (http://international.internet2.edu/partners shows a complete list of Internet2 partners.) Like the commercial Internet, all use TCP/IP as the common data transport service; most are interconnected.

While the networks are not new, they continue to be expanded and their transport speed continues to rise. But the applications that take advantage of the speed are still mostly in development. "These networks are like Petri dishes-rich, fertile environments in which lots of things can grow," says Greg Wood, Internet2's director of communications. "Some of this will turn out to be penicillin. Some will be death cap mushrooms."

Some commercial sectors assume that, sooner or later, the speed will be available to businesses and consumers. And some companies view super-networks as a good way to test new products, either because of their speed, their isolation, or both. "A lot of the collaborations are taking place between university researchers and companies," says Wood. "Microsoft Research, for example, can test its new collaboration software ideas in this environment in a way you couldn't on the commercial Internet. Cisco revised the code for its multicast technology and ultimately developed a version for the national backbone network." The network has also been used as an isolated test bed for IPv6-the next generation of IP protocols-developed within the Internet Engineering Task Force (IETF), the standards-setting body for the Internet.

Conferencing: the killer application

If current use by Internet2 members is any indication, the killer application for super-networks will be teleconferencing. Teleconferencing is hardly new, but given sufficient throughput, the picture approaches DVD quality.

Conferencing is especially appealing to academia, where airline tickets, hotel rooms, and face-to-face gatherings are a way of life. Business executives travel too, of course, but most corporate projects take place under the same roof and are confined to a single company. Even if you need high-performance conferencing, you can set up a point-to-point link-you don't have to deal with the messiness of going across companies or bringing in people on an ad hoc basis. But academic researchers are spread out. The experts within any discipline are seldom clustered at a single institution, and the group seldom remains the same. To connect them, you need a universal network of networks-the Internet-but one that delivers significantly greater speed.

It also helps that academic researchers have a greater tolerance for the inevitable glitches that go along with an experimental system. In that sense, their requirements are the inverse of those of commercial users, who pay primarily for reliability and only secondarily for speed. Even so, glitches are getting rarer. Recently, a stream of uncompressed HDTV video was sent over the network at 1.5 Gb/sec from Seattle to Arlington, Virginia, near Washington D.C. The stream ran for an entire day with no loss, demonstrating the reliability of the network not just at the backbone, but at the regional and local levels.

Of course, teleconferencing equipment has been around for years, and yet people habitually hop on airplanes and meet in person. Many argue that pixels on a screen are no substitute for a face-to-face meeting. Even Internet2 affiliates, who had the world's best transport mechanism for teleconferencing at their fingertips, habitually met in person. Then came 9/11, and suddenly, fewer people wanted to fly. "So we decided to try and use the technology we had been developing to meet virtually," says Wood. "We actually had a record number of people in a virtual international meeting, using everything from TV quality video conferencing to DVD-quality video streaming. It has proven a useful exercise, not only for getting people together, but as a pressure point for advancing the technology." Now a teleconference among Internet2 affiliates is held monthly. The meetings have included both traditional technical sessions and some radical departures, such as a dance performance from the University of Southern California.

Visible thought and virtual surgery

Super-speed networks have attracted a number of medical applications. For example, a proposed National Digital Mammography Archive would provide a central repository for images to help diagnose and fight breast cancer. If every breast X-ray in the U.S. were archived, the traffic, estimated at 28 terabytes a day, would by itself justify high-speed networking. Another project underway provides near real-time brain mapping. Data gathered from an MRI scanner are processed on a CRAY T3E supercomputer and viewed as a 3D image of the brain in near real time. The image enables specialists to "see" a person thinking and help diagnose such disorders as schizophrenia, amnesia and epilepsy. High-speed networking permits the scanning to be done at a remote location.
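The mammography figure alone shows the scale of data these applications imply: spread evenly over a day, 28 terabytes works out to a sustained rate in the multi-gigabit range. A quick sanity check (the even 24-hour spread is a simplifying assumption; real traffic would be burstier):

```python
# Sustained bandwidth implied by 28 terabytes of mammography images per day,
# assuming (simplistically) that the traffic is spread evenly over 24 hours.
terabytes_per_day = 28
bits_per_day = terabytes_per_day * 1e12 * 8
gbps = bits_per_day / (24 * 3600) / 1e9
print(f"{terabytes_per_day} TB/day is roughly {gbps:.1f} Gb/s sustained")
```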

At the Stanford School of Medicine, the virtual hand is already in use as a teaching tool-anyone on the network can use it. To perceive the 3D effect, students use LCD eyewear (from StereoGraphics Corp.) that allows each eye to see the appropriate view from a pair of stereo images. The effect is startling, especially beneath the skin. "The hand is a prototype for how visuals of other organs and body parts could be developed," says Professor Dev. The group has also developed animations relating structure to function, showing, for example, how a finger flexes and extends normally, and what happens if a nerve is lost or a tendon is cut.

A more ambitious project at Stanford, called Surgery Workbench, is still in the research stage. The tool will be to surgeons what a flight simulator is to pilots-a place to learn from mistakes without life-or-death consequences. Using custom-built input devices, medical students interact with an onscreen model of a female pelvis. "Each surgical manipulation corresponds to specified behaviors of the model," says Dev. "So if you push the anatomy with a probe, you deform the surface, but you don't break polygons, as you would with an incision."
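Dev's distinction between deforming and cutting maps onto how such surface models are typically represented: geometry (vertex positions) versus topology (the polygon list). The sketch below is a generic illustration of that distinction, not SUMMIT's code-the grid, falloff function, and radius are all invented for the example. A blunt probe only moves vertices; an incision would have to split triangles and add new ones.

```python
import math

# Toy tissue patch: a 20 x 20 grid of vertices; triangles index into it.
N = 20
vertices = [[x * 0.1, y * 0.1, 0.0] for y in range(N) for x in range(N)]
triangles = [(y * N + x, y * N + x + 1, (y + 1) * N + x)
             for y in range(N - 1) for x in range(N - 1)]

def probe(verts, tip, radius=0.3, depth=0.05):
    """Press a blunt probe at 'tip': displace nearby vertices with a simple
    linear falloff. The triangle list is untouched -- no polygons 'break'."""
    for v in verts:
        d = math.dist(tip, v)
        if d < radius:
            v[2] -= depth * (1 - d / radius)

probe(vertices, tip=(1.0, 1.0, 0.0))
print(f"{len(triangles)} triangles before the probe, {len(triangles)} after")
# An incision, by contrast, must edit the topology itself: split the triangles
# along the cut and insert new vertices on either side of it.
```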

To test your skills as a budding surgeon, you slip your fingers into the loops of what look like two pairs of scissors, one for each hand. The two devices are each mounted on rods that can be pushed, pulled and angled, thereby simulating a laparoscopic tool. The choice of virtual tips includes forceps, probes, scalpels, small scissors and grippers. For added realism, the input tools were built by Immersion Corp., which specializes in tactile feedback technology, also known as "haptics." "With haptics, you can feel the pop of the needle going through different layers of tissue," says Dev.

The data for Surgery Workbench reside across the Stanford campus at the Stanford-NASA Biocomputation Laboratory, with an eight-processor Sun server providing the visuals to the network. Parallel processing is needed to rapidly solve equations at each node, and the network thus becomes a resource for sharing not just data, but computational power. And even so, only a small area is actually computed. "While the whole graphic is animated, the biomechanics operates only very locally," Dev says. "If you have one pair of tools cutting in one place, you can't have another pair cutting somewhere else. The high-speed network is good for getting the imagery out to us, but its main use here is in delivering these computational resources."

Radio astronomy

Another use of high-speed backbone networks is in transporting large amounts of data gathered from instruments to researchers around the world. One example: the Large Hadron Collider under construction west of Geneva, Switzerland. "Physicists are expecting terabyte- and even petabyte-size data," says Wood. "That requires gigabit-plus transfer rates."

High-speed networking has already become integral to radio astronomy, which often employs a technique called very long baseline interferometry. VLBI makes use of multiple antennas from around the world, all looking at the same astronomical radio object. The data are combined in a processing center, enabling the separate antennas to function as if they were one. "The advantage of having multiple antennas separated by great distances is extremely high resolution of distant radio sources," says Alan Whitney, associate director of the MIT Haystack Observatory, in Westford, Massachusetts. "We can resolve to tens of microarcseconds-comparable to resolving the dimples of a golf ball in Los Angeles as viewed from Boston, some 2,600 miles away. The resolution is more than 10 times better than the best optical telescopes."
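Whitney's comparison survives a quick trigonometric check. Using 2,600 miles, a roughly 4 mm dimple, and an Earth-sized baseline at a 3 mm observing wavelength-the dimple size and wavelength are illustrative choices, not Haystack's numbers-the dimple subtends a couple of hundred microarcseconds, comfortably coarser than a resolution of tens of microarcseconds:

```python
import math

RAD_TO_UAS = 180 / math.pi * 3600 * 1e6    # radians -> microarcseconds

# Diffraction-limited resolution of an Earth-sized array: lambda / D.
wavelength_m = 0.003                        # ~3 mm (about 100 GHz), illustrative
baseline_m = 12.7e6                         # roughly the diameter of the Earth
print(f"lambda/D: ~{wavelength_m / baseline_m * RAD_TO_UAS:.0f} microarcseconds")

# Angular size of a ~4 mm golf-ball dimple seen from 2,600 miles away.
dimple_m, distance_m = 0.004, 2600 * 1609.34
print(f"dimple: ~{dimple_m / distance_m * RAD_TO_UAS:.0f} microarcseconds across")
```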

VLBI is not new. But before the use of high-speed networking, each collecting location wrote the data to disk or tape and shipped it to a central processing center, where the media were remounted and the data were combined and analyzed. From the time of observation, the final results took about a month. Now the results are available in near real time. Whitney says the speed is especially useful for observing rapidly changing events such as quasars, supernovae, and gamma-ray bursts. "We are also better able to act on our observations, determining whether we want to pursue the object further."

Haystack calls the network version of VLBI "e-VLBI" [as in e-mail]. It involves a massive amount of network data. Data rates for each antenna run up to 1 Gbps, with up to 20 antennas involved. A single observing session can last several days. As with a supercomputer, instrument time is allocated carefully. Scientists, often collaborating internationally, propose projects and, if accepted, are granted access to a set of antennas. Processing may be done at one site or split across many sites. There are only a handful of processing centers, including ISAS (near Fuchinobe) and the Kashima Space Research Center in Japan.
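Those numbers multiply out quickly, which is why e-VLBI is such a demanding network tenant. A rough estimate for a single session, assuming every antenna records continuously at the full rate (the two-day duration is an illustrative stand-in for "several days"):

```python
# Back-of-the-envelope data volume for one e-VLBI observing session,
# assuming continuous recording at the full per-antenna rate.
gbps_per_antenna = 1.0
antennas = 20
days = 2                        # illustrative; sessions can run "several days"
total_bits = gbps_per_antenna * 1e9 * antennas * days * 24 * 3600
print(f"~{total_bits / 8 / 1e12:.0f} terabytes generated in {days} days")
```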

Network backbone speeds vary by region. According to a summary report from a workshop held at Haystack in April 2002, the Internet2 research network provides 10 Gbps, primarily between major U.S. research institutions and universities. Europe has similar plans with its Geant network. Japan has several dedicated high-speed networks that have reached 2 Gbps-mostly over dedicated non-IP links. TransPAC, a network jointly sponsored by Japan and the U.S., connects Tokyo and Chicago at about 600 Mbps. All e-VLBI applications share a "last mile" problem: the slow connection between the research network and the antenna is a bottleneck. Another problem is the continuous nature of e-VLBI transmissions. While most research networks operate at only a small fraction of their available capacity, e-VLBI could use up large portions of bandwidth for long periods of time.

Besides astronomy, e-VLBI is also proving useful for geophysics research. "As a byproduct of the data, the positions of each antenna on the Earth can be measured to within a centimeter," says Whitney. "That enables you to measure tectonic plate motion directly, as well as to see the variations of the rotation rate of the earth. As it turns out, it varies by the day, influenced primarily by the weather. While these measurements were possible before, with Internet2, we hope to do them more cheaply and more often."

Land speed records

Aside from applications, network speed is a constant focus of Internet2 members. The consortium sponsors an ongoing contest, the Internet2 Land Speed Record, which judges entries on the basis of bandwidth and end-to-end distance. At this writing, the champions are a team of Dutch and U.S. researchers who, last January, transferred 6.7 gigabytes of data across 10,978 kilometers in less than a minute. The average speed was more than 923 megabits per second, more than 3,500 times faster than a typical home broadband connection.
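The arithmetic behind the record is easy to reproduce, and it shows why the contest rewards distance as well as raw bandwidth: the same throughput scores higher the farther apart the endpoints are. (The 58-second duration below is an assumption consistent with "less than a minute.")

```python
# Reproduce the record figures: 6.7 GB moved 10,978 km in under a minute.
gigabytes, km, seconds = 6.7, 10_978, 58   # 58 s is an assumed duration
mbps = gigabytes * 8e9 / seconds / 1e6
print(f"average throughput: ~{mbps:.0f} Mb/s")

# Scoring the entry as bandwidth times distance:
petabit_meters_per_sec = mbps * 1e6 * km * 1000 / 1e15
print(f"~{petabit_meters_per_sec:.1f} petabit-meters per second")
```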

But day-to-day and point-to-point, performance is considerably slower-and raising it is an ongoing area of research. "You have a 10 Gb/sec backbone, a 2.4 Gb/sec regional network, and, say, a 1 Gb/sec LAN-so you would expect to get somewhere close to a gigabit/second across the backbone," says Wood. "But not necessarily. You also need to have the operating system tuned properly and the user connections set up properly. The MTU size-the size of the chunks of data-must be optimal."
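A big part of that tuning is matching the TCP window to the bandwidth-delay product: a sender can keep only one window's worth of data in flight per round trip, so on a long, fast path a small default window caps throughput regardless of link speed. A sketch of the arithmetic, with an assumed 70 ms coast-to-coast round-trip time and illustrative window sizes:

```python
# Bandwidth-delay product and the throughput ceiling imposed by a TCP window.
# The RTT and window sizes are illustrative values, not Abilene measurements.
link_gbps = 1.0                 # the gigabit LAN in Wood's example
rtt_sec = 0.070                 # assumed coast-to-coast round-trip time

bdp_mb = link_gbps * 1e9 / 8 * rtt_sec / 1e6
print(f"bandwidth-delay product: ~{bdp_mb:.1f} MB must be in flight to fill the path")

for window_kb in (64, 1024, 8192):
    mbps = window_kb * 1024 * 8 / rtt_sec / 1e6
    print(f"{window_kb:5d} KB window -> at most ~{mbps:7.1f} Mb/s")
```

With the classic 64 KB window, throughput on that path tops out around 7 Mb/s; filling a gigabit pipe needs windows in the megabyte range.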

Another programming challenge is to figure out how to get high-performance applications to work optimally. In the early days, application developers just threw things "over the wall" to the network engineers. Now, both groups address the throughput problem together. "Application developers have gotten more sophisticated about the kind of networking required for a particular application," says Wood. "In audio, for example, very low packet loss is almost more important than bandwidth, because of the real-time nature of the application and because of the way Internet protocols handle packet loss. If you lose a bit of information along the way within TCP/IP, you are supposed to retransmit the piece to start the stream again. That's ordinarily not a problem, but you can't get away with it in a live conference."
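Wood's point about loss has a quantitative side as well. A widely used rule of thumb, the Mathis approximation, puts steady-state TCP throughput at roughly MSS / (RTT x sqrt(p)) times a constant near 1.2 for loss rate p; the MSS and RTT below are illustrative values:

```python
import math

# Mathis et al. approximation for TCP throughput under random packet loss:
# rate ~ C * MSS / (RTT * sqrt(p)), with C roughly 1.22.
MSS_BYTES = 1460                # typical Ethernet-sized segment
RTT_SEC = 0.070                 # assumed cross-country round trip
C = 1.22

for loss in (1e-2, 1e-4, 1e-6):
    mbps = C * MSS_BYTES * 8 / (RTT_SEC * math.sqrt(loss)) / 1e6
    print(f"loss rate {loss:.0e}: throughput ceiling ~{mbps:8.1f} Mb/s")
# Live audio and video cannot wait for retransmissions at all, which is why
# real-time streams typically ride on UDP/RTP rather than TCP.
```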

Commercial potential

For those of us without radio telescopes in our back yard, who prefer leaving surgery to the surgeons, will faster networks make any difference? The entertainment studios think so. In the U.S., much-touted video-on-demand services would do very well on high-bandwidth networks, enabling viewers to punch in the programming they want to see rather than waiting for it to appear on a schedule. Super-networks would transform television into an interactive, on-demand experience, just as the commercial Internet is today. Teleconferencing may also have some applications, especially among businesses.

What's less certain is whether this kind of speed is actually deliverable in the foreseeable future. The problem here is not the backbone, but the connection from that backbone to the computer-the so-called "last mile." 100BaseT twisted pair is limited to 100 Mbps. 802.11g has a theoretical upper limit of about half that. Even within academia, extending a network is no simple matter. When Stanford's University of Wisconsin collaborator wanted to get on the network, a special connection had to be installed from Madison to the La Crosse campus, 100 miles to the northwest. And even within a campus, access may not be available. Stanford is now fully wired, but three years ago, Dev's group got only 100 Mbps into its building, and even then, only to certain monitors. When I checked my e-mail in a work room near the receptionist's desk, the response time was no faster than my machine at home. And when the researchers go home at night, they surf the net at conventional speeds, just like the rest of us.
