Pacific Connection (English)

P2P: Thin-Server Applications

Remember thin client computing? As servers got faster, so it was argued, clients would grow correspondingly less capable, that is, "thinner." Why put processing power and storage at the desktop when so much of it was available over the network? Thin clients would put IT firmly back in control by using clients that were visually capable---able to handle a graphical user interface---but essentially dumb. But as a computing revolution, thin client computing never materialized. The fully functional PC, now 20 years old and "fatter" than ever, remains the client of choice on most local area networks.

Now the industry, always looking for the next big thing, is looking in the opposite direction with a network architecture known as peer-to-peer (P2P). Whereas thin client architecture puts the intelligence on the server, P2P puts it on the client. Thin client is centralized; P2P is highly decentralized. The P2P movement relies on faster processors and cheaper storage, both of which can be found in abundance on off-the-shelf PCs. P2P also assumes smarter wireless devices that can act as active nodes on a wireless network.

P2P has an undeniable cultural appeal to the Silicon Valley community, whose mantra has always been decentralization. This is, after all, an industry that famously challenged mainframes with PCs, and conventional supercomputers with multi-processing systems that use off-the-shelf chips. Peer-to-peer is one more step in that direction. At the same time, the lack of central control is bound to make IT professionals even more concerned about management, especially about security when a peer-to-peer network extends outside the corporate firewall.

The definition of peer-to-peer computing is somewhat blurry. Most observers agree that it involves clients assuming some of the functions of a server, such as computation, file services and caching. Others apply a stricter definition. And while P2P is not new, it has received added prominence through two types of applications. The first, which might be called "disk-centric," involves file sharing, as epitomized by Napster and other services. Disk-centric P2P also includes new forms of file caching to speed Internet throughput.

The second form of P2P is processor-centric, and is best known from the SETI@home project (Pacific Connection, August 1999), which "borrows" unused cycles from computers around the globe in a quest to find intelligent life in the universe. Patrick Gelsinger, vice president and chief technology officer for Intel's Technology and Research Labs, argues that the potential savings of processor-centric P2P are enormous. "A five teraflop sustained supercomputer is the largest built on the planet today," he said in a speech last February. While that computer cost about $100 million, a peer-to-peer supercomputer providing about an order of magnitude greater peak performance could be assembled for less than $1 million, Gelsinger said.

Intel and a startup called United Devices have shed more light on processor-centric P2P through a cancer research project conducted by the Department of Chemistry at the University of Oxford and the National Foundation for Cancer Research in the U.S. Software residing on PC clients changes the shape of a proposed molecule, then attempts to dock it into a protein site---thereby inhibiting the spread of the disease. Successful dockings are ranked by strength and filed for later study.
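In outline, each participating client runs a simple fetch-compute-report loop. The sketch below is a hypothetical illustration of such a cycle-harvesting loop; the coordinator URL, work-unit format, and scoring routine are invented for this example and are not the actual United Devices or SETI@home software.

```python
# Hypothetical sketch of a cycle-harvesting P2P client.
# The coordinator URL, work-unit format, and scoring function are
# invented for illustration; real projects define their own.
import json
import time
import urllib.request

SERVER = "http://example.org/workunits"   # hypothetical coordinator

def score_docking(candidate):
    """Stand-in for the expensive local computation, e.g. scoring how
    well a candidate molecule docks into a target protein site."""
    time.sleep(1)                          # stands in for hours of idle CPU time
    return sum(ord(c) for c in candidate) % 100

def run_forever():
    while True:
        # 1. Fetch a work unit from the central coordinator.
        with urllib.request.urlopen(SERVER + "/next") as resp:
            unit = json.load(resp)
        # 2. Burn spare cycles on the computation itself.
        result = score_docking(unit["candidate"])
        # 3. Report the result; strong matches are filed for later study.
        payload = json.dumps({"id": unit["id"], "score": result}).encode()
        urllib.request.urlopen(SERVER + "/result", data=payload)

if __name__ == "__main__":
    run_forever()
```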

SETI@home's director, David Anderson, is the chief executive officer of United Devices, a company hoping to find commercial applications for cycle harvesting. But so far, the only publicized user is Exodus Communications, which is tapping the "United Devices Member Community" to improve the accuracy of its website stress testing---an interesting application, but not quite the one envisioned for the technology.

And that raises the question: what is P2P's commercial potential? For some observers, P2P has the earmarks of yet another technology trend that bursts forth with the promise of a revolution, attracts a horde of investment funding, then busts. Lee Gomes, writing in the Wall Street Journal interactive edition, has pointed out a number of setbacks for the technology. For example, he notes that Sun picked up Infrasearch for just $10 million, "a sum that doesn't even register on the Richter Scale of Silicon Valley deals---and then promptly folded it into an existing research-and-development project." Popular Power, another P2P company, ran out of funds and closed last March.

And then there's Napster, which was certainly a hit until lawsuits forced the service to become fee-based. Since then, users have flocked to other file swapping sites including Gnutella, Kazaa, and Aimster, and the swapped files include not just music, but commercial software. Did Napster succeed because it was the first to employ a P2P architecture, or because the site made piracy so easy?

Netscape alumni launch Kontiki

Last August, P2P's credibility as a commercially viable architecture got something of a boost with the launch of Kontiki (known in its pre-launch "stealth mode" days as Zodiac Networks), which reunites four Netscape alumni. Mike Homer, who oversaw Netscape's Netcenter portal operations, is CEO, while Wade Hennessey, who developed My Netscape and worked on real-time memory management at Kaleida Labs, is Kontiki's chief technology officer. The top investor is The Barksdale Group, headed by former Netscape CEO Jim Barksdale, and Marc Andreessen, now chairman of Loudcloud, is on the executive advisory board.

Kontiki's technology, called "Bandwidth Harvesting," takes on network caching companies in delivering TV-quality content. Bandwidth Harvesting works, in part, by time-shifting downloads to off-peak periods, and by connecting simultaneously to multiple servers to download different portions of the same file. But in addition, Bandwidth Harvesting also includes "relay caching." Clients requesting a common file first connect with one or more origin servers, and subsequent requests may be fulfilled by the clients themselves. Each receiving client passes the file on to clients that have requested the file as well---hence, your home computer becomes a network cache. Kontiki argues that relay caching reduces the number of Internet hops, as well as the congestion at peering points.
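A minimal sketch of the multiple-source piece of that idea, fetching different byte ranges of one file from several hosts in parallel and reassembling them, appears below. The host addresses, file path, and chunk size are assumptions made for the example; this is not Kontiki's actual protocol.

```python
# Sketch of downloading one file in pieces from several sources at once,
# the basic idea behind multi-source downloads. The hosts, file path, and
# chunk size are hypothetical; this is not Kontiki's actual protocol.
import concurrent.futures
import urllib.request

SOURCES = [
    "http://origin.example.org",      # origin server
    "http://198.51.100.7:8080",       # peer that already holds the file
    "http://203.0.113.20:8080",       # another relay-caching peer
]
FILE_PATH = "/movies/trailer.mpg"
CHUNK = 1 << 20                       # request 1 MB per range

def fetch_range(source, start, end):
    """Ask one source for a specific byte range of the file."""
    req = urllib.request.Request(source + FILE_PATH)
    req.add_header("Range", f"bytes={start}-{end}")
    with urllib.request.urlopen(req) as resp:
        return start, resp.read()

def download(total_size):
    """Spread the byte ranges across the sources, then reassemble."""
    ranges = [(i, min(i + CHUNK - 1, total_size - 1))
              for i in range(0, total_size, CHUNK)]
    parts = {}
    with concurrent.futures.ThreadPoolExecutor(len(SOURCES)) as pool:
        futures = [pool.submit(fetch_range, SOURCES[n % len(SOURCES)], s, e)
                   for n, (s, e) in enumerate(ranges)]
        for f in concurrent.futures.as_completed(futures):
            start, data = f.result()
            parts[start] = data
    return b"".join(parts[s] for s in sorted(parts))
```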

Not surprisingly, traditional caching companies are skeptical. "You really have to own and control the network to guarantee quality of service," said Kieran Taylor, director of product management at Akamai Technologies Inc., one of the best known of the traditional caching companies, in a Wall Street Journal interview. "A stable e-business infrastructure isn't built on teenagers' desktops." Another problem facing P2P lies in its definition. Kontiki, for example, has been described as a P2P company---but never uses the term "peer-to-peer" in describing itself.

Or consider Groove Networks, which Bob Knighten, Intel's peer-to-peer evangelist and convener of the Peer-to-Peer Working Group, cites as a commercially viable P2P company involved with project collaboration. Groove Networks was founded in 1997 by Ray Ozzie, best known as the creator of Lotus Notes. The technology's early adopters include the pharmaceutical company GlaxoSmithKline, Raytheon and the U.S. Department of Defense. But is Groove 1.0 truly a P2P technology? In an April press release, the company states: "Although Groove software technically can be used without servers, operating in a pure 'peer-to-peer' manner when communicating with other computers using Groove, the product is dramatically more useful to enterprises when combined with the capabilities of the server-based Groove Network Services."

Another commercially viable form of P2P, says Knighten, is grid computing. "These people are interested primarily in using the spare cycles on high powered computers already located in labs---as a replacement for supercomputers." The biggest grid in the planning stages is the Distributed Terascale Facility, a $53 million project funded by the National Science Foundation that will link four U.S. research centers. The resulting platform, due for completion in 2002, will be more than a thousand times more powerful than IBM's Deep Blue. It will run Linux, incorporate 3,300 Intel Itanium McKinley chips, and be linked by a Qwest 40-gigabit-per-second network.

But Rich Hirsh, deputy division director for the NSF's Division of Advanced Computational Infrastructure and Research, disagrees with Knighten's characterization. "The grid is certainly an example of distributed computing, but it's not peer-to-peer," he says. The reason: the four machines comprising the grid are fully dedicated to specific projects. "We're not culling excess cycles. There are never any excess cycles---the ratio of applications to available computer time is roughly 2.5 to one. I'd characterize the grid as four local clusters, linked with the biggest pipe you'll ever see." Hirsh says the same holds true for the EuroGrid, which is funded by the European Commission: the distributed platform is dedicated, with no extra compute cycles to spare.

Is there such a thing as a "pure" peer-to-peer network, one that contains no dedicated server? Gnutella qualifies. A new node's initial connection requires only the IP address of a client already on GnutellaNet; that link then provides the IP addresses of other connected clients. Gnutella's architecture is not necessarily more efficient than Napster's, but it aims to be more "lawyer-proof." As the website puts it: "Gnutella is nothing but a protocol. It's just freely-accessible information. There is no company to sue. No one entity is really responsible for Gnutella." Talk about decentralization.
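The bootstrap and search steps can be pictured with a toy model. The sketch below illustrates the idea only, not the real Gnutella wire protocol: a newcomer needs just one known peer's address, and queries then flood from neighbor to neighbor with a hop limit (TTL) so they eventually die out.

```python
# Toy model of Gnutella-style peer discovery and query flooding.
# This illustrates the idea only; it is not the real wire protocol.

class Peer:
    def __init__(self, name, files):
        self.name = name
        self.files = set(files)
        self.neighbors = []           # peers this node is connected to

    def connect(self, other):
        self.neighbors.append(other)
        other.neighbors.append(self)

    def search(self, keyword, ttl=4, seen=None):
        """Flood a query to neighbors, decrementing a hop limit (TTL)."""
        seen = seen if seen is not None else set()
        if self.name in seen or ttl == 0:
            return []
        seen.add(self.name)
        hits = [(self.name, f) for f in self.files if keyword in f]
        for n in self.neighbors:
            hits += n.search(keyword, ttl - 1, seen)
        return hits

# Bootstrapping: a newcomer needs only the address of one existing peer;
# connecting to it links the newcomer into the rest of the network.
a = Peer("a", ["song.mp3"])
b = Peer("b", ["report.doc"])
c = Peer("c", ["song_remix.mp3"])
a.connect(b); b.connect(c)
newcomer = Peer("new", [])
newcomer.connect(a)                   # the one known address
print(newcomer.search("song"))        # hits found by flooding the query
```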

But in considering the commercial potential for P2P, a looser definition is more apt: any network in which the clients take on the role of a server. Perhaps P2P should be called thin-server computing. "Peer-to-peer says that the boundary between the client and the server is no longer that clear," says Li Gong, director of engineering, peer-to-peer networking, at Sun Microsystems. "It's blurred and depends on what sort of device you have, how powerful it is, and how you want to use it. Client/server is predefined. No matter what you have, you are a client, and no matter what I have, I am a server, because that's how the IT department set us up. In the future, the whole relationship will be more fluid, meaning that we can switch roles on the fly. We could have the same kind of device, but I might decide to open mine up to share with others, or even provide services to other people. That means that my device is now a server."

Gong says that client/server and peer-to-peer will be complementary, with hybrids. Some peers in a peer-to-peer network could act as "super-peers"---acting more like servers than clients, but not dedicated as such. "We don't take a purist point of view that says everything has to be peer-to-peer. The world is complicated," he says.

Intel's Bob Knighten says that, in the future, wireless devices will take on this dual role. "Ten years from now, PDAs will probably have a gigabyte of memory and a 10GHz processor. And they'll always be connected to the 'ether'---talking to one another in one way or another, most of the time. In particular we'll see the situation where the proper computing resource, be it storage, compute power, or whatever, will largely get farmed out in 'the cloud.' As a user, you won't need to know where these resources are or how services will be transacted. They'll just happen." Knighten predicts that within a year or two, high-end digital cameras will come with wireless connections and a built-in GPS. "You'll take pictures and if you like one, you'll immediately download it, together with information about when and where it was taken, to your home computer."

Distributed standards

However you define it, P2P computing will need a set of standards if it is to succeed in the commercial market. Given the technology's decentralized structure, it's not surprising that at least two groups have arisen: the Peer-to-Peer Working Group convened by Knighten, and Sun Microsystems' Project JXTA. The former is structured as an industry consortium, whose larger members include Hewlett-Packard, Hitachi, and Fujitsu PC Corporation. The latter is structured more as an open source project with its own set of "contributors."

JXTA is short for "juxtapose," as in side by side, recognizing that peer-to-peer will most likely complement, not replace, client/server networks. Bill Joy, Sun's chief scientist and co-founder, says that the project fulfills a long-term vision. "I wanted a computing model based on the systems approach from UNIX platforms, the object-oriented, portable code capability from Java technology, and the universal syntax for describing portable data from XML," he said in a statement. "So, we started Project JXTA, which has become a platform-independent, language agnostic, Open Source technology to enable new and innovative distributed applications." JXTA is being developed as an Open Source project under the Apache licensing model. The website is hosted by CollabNet, Inc.

One part of JXTA attracting attention is a proof-of-concept, open, decentralized, peer-to-peer network called JXTANet---a competitor to GnutellaNet. "In contrast to Gnutella, JXTA was designed for a multiplicity of purposes," writes Kelly Truelove, founder and CEO of Clip2, on the OpenP2P.com website, whereas Gnutella was designed strictly for file sharing. Truelove says that Gnutella is essentially one protocol, while JXTA consists of several. "This difference is considerable and hard to overstate. A slightly exaggerated analogy: if Gnutella were a pocket calculator, then JXTA would be a PC." He says that the services provided by JXTANet, including peer discovery, group membership, pipes, and monitoring, provide a rich foundation for higher-level applications.
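Truelove's calculator-versus-PC contrast is easier to see as a toy model: a single-purpose protocol gives you one operation, while a small set of general services (discovery, group membership, pipes) can be composed into many different applications. Everything in the sketch below is invented for illustration and is not the JXTA API.

```python
# Toy model of composable P2P services versus one single-purpose protocol.
# All names are invented for illustration; this is not the JXTA API.
from collections import defaultdict
from queue import Queue

class Network:
    """Toy registry standing in for the underlying P2P network."""
    def __init__(self):
        self.adverts = defaultdict(set)    # resource -> peers advertising it
        self.groups = defaultdict(set)     # group name -> member peers
        self.pipes = defaultdict(Queue)    # pipe id -> message queue

class Peer:
    def __init__(self, name, net):
        self.name, self.net = name, net

    # Discovery service: advertise resources and find who has them.
    def advertise(self, resource):
        self.net.adverts[resource].add(self.name)

    def discover(self, resource):
        return self.net.adverts[resource]

    # Membership service: join a named peer group.
    def join(self, group):
        self.net.groups[group].add(self.name)

    # Pipe service: simple asynchronous channels between peers.
    def send(self, pipe_id, message):
        self.net.pipes[pipe_id].put((self.name, message))

    def receive(self, pipe_id):
        return self.net.pipes[pipe_id].get()

# File sharing is just one application of the general services; a chat or
# cycle-sharing application could be built on the same three.
net = Network()
alice, bob = Peer("alice", net), Peer("bob", net)
bob.advertise("file:song.mp3")
bob.join("music-sharers")
holders = alice.discover("file:song.mp3")      # {'bob'}
alice.send("bob-inbox", "please send song.mp3")
print(holders, bob.receive("bob-inbox"))
```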

With PC sales, Internet commerce, and corporate revenues all slumping, P2P could be the next big thing or simply more vaporware. Aside from a few large corporations doing pilot programs, most P2P companies are startups, meaning they have yet to make money. And while operating in the red is understandable, venture capital firms have grown less tolerant of long "burn rates." And so for the time being, P2P remains an interesting experiment in which PCs become servers. Akamai is right: peer-to-peer means the server network is in teenagers' bedrooms. But as the record industry knows full well, you should never underestimate the power of a teenager.

An Interview with Li Gong, Sun Director of Engineering, Peer-to-Peer Networking, and head of the JXTA engineering team

Li Gong was born in Beijing, China, and received a PhD in computer science from Cambridge University. Working at Sun for the past five years, he first managed the Java security and networking group, and then the home networking group. Gong is a Sun Distinguished Engineer.

What does JXTA bring to P2P computing?

In the last year or two, lots of startups have been trying to break into this field. But every company is promoting a different style of application with a different interface, implemented with a different software stack. The world is becoming more fragmented, and that's counter to the spirit of what the net is all about.

The Internet has succeeded because it's a single standard.

Right. So Sun saw last year that there needed to be a common development layer residing everywhere.

A global P2P standard?

Right. This is where Sun decided it could help, because we've been pushing open standards for a long time. We started the JXTA program by working on a thin layer of P2P foundations that we think should be the common layer for all P2P software. In doing this, we've gotten the feedback of many startups in the P2P space. We also decided that Sun should not push this standard alone---it should be an industry-wide effort. So we've made it an open source project and invited everybody who is interested to contribute.

So you are developing a standard layer above, say, Gnutella.

Gnutella and other services are not designed to be this universally available layer. They are designed to solve a problem. They are applications that began with the question: say you want to share a file; what's the best way to do it? Gnutella does the same thing Napster does, but without a central server. But we see the P2P space more broadly: better utilization of bandwidth, as well as information discovery, acquisition, and storage. And of course, utilization of spare computing cycles.

What do you mean by information discovery?

Discovery encompasses the whole cycle of how you acquire, store, index, search, and use data. Discovery is important because the ratio of the information that the world generates to the total information that we publish each year is roughly a million (10^6) to one. For every megabyte of information that we generate, only one byte is published. Then look at how much information Google can search. If you go to any portal, you discover that the information you can reach through it is minimal compared to the information we could publish.

How will wireless devices benefit from P2P?

There are any number of scenarios. Suppose we are communicating through our devices and you tell me I should call someone. How do I get the number? What if you could just push a button and zap it from your device's directory to mine? Or perhaps we share a buddy list for chat services. Or perhaps we're on different chat systems and can still "discover" each other. Perhaps your RIM pager is my "buddy," so whenever we're in the same area, I discover you. It's theoretically possible because whoever operates the network is in the perfect position to know where your device is and where my device is.

Some Wall Street observers have said peer-to-peer is just the next big buzz---interesting, but with no commercial potential. They ask: if Sun really thought P2P had commercial value, why make the JXTA project open source?

That's one impression, but let me give another. JXTA is bigger than peer-to-peer. People in the company have a much grander vision of how computing will evolve over the longer term.

Here and now, we decided that the only way for JXTA to make it in today's market is to open source it. The only way that such a standard will really be accepted by all the fair and honest parties is to open it up.

You invented Java, put it in the public domain, but how do you make money out of it? How will you make money out of JXTA?

If you look back five years ago, Sun was primarily a desktop workstation and small-scale server company. When the world moved from the desktop to the client/server paradigm, Sun moved too and became a server company, and Sun has done very well.

So you're saying by inference that being on top of a peer-to-peer standard...

...Sun will be in a better position to understand what's coming and to make sure that our hardware and software solutions are geared up when things happen. Even when we became a server company, we didn't throw out our workstation products.

Some have argued that security is an Achilles heel with peer-to-peer. If I beam you part of my directory, what's to stop you from going into the rest of my directory and pulling out confidential numbers?

I don't think that peer-to-peer has presented many new security challenges. The main problem is that there are so many security techniques that go unused. The complaint among security experts is that "people don't use what we invent." Last year I co-edited a special issue of IEEE Computing, and wrote a guest editorial lamenting this. So there are lots of security options out there waiting to be used. The whole question is what you put into your commercial system, and whether you put it in correctly.

But there's a perceptual problem as well---you have to convince people that their client will be safe when it acts as a server.

Right. There's a social problem and a perception problem. Look at Microsoft Outlook, which has been hit with lots of security problems that, in effect, transform your PC into a mini-server: you receive a piece of email that triggers some action on your machine. But that's not a new problem.

How do you view P2P and JXTA in Asia?

China has already surpassed all other countries in the number of mobile phones---they had 120 million mobile phones by the end of June. Japan is one of the pioneering countries in the wireless mobile phone world, so that's an area where lots of new interesting applications will happen---including peer-to-peer.

Some people argue that the reason mobile phone usage is lower in the U.S. is that the wired phone system works reasonably well. That same argument applies to P2P. In China, that technology may be more welcome because they lack our computer infrastructure.

Are you saying that there are fewer computers in Asia, therefore the drive to tap unclaimed cycles is stronger?

It actually applies to all three aspects of P2P---innovation, discovery, and the utilization of bandwidth. Whereas supply exceeds demand here, China has more motivation to get the most out of its resources.
