We are already taking baby steps toward mapping the web; we're just not doing it in any coordinated way.
Many search engines provide weak versions of neighborhoods with a "find similar pages" button or with a hierarchical search mechanism. These engines are adopting an increasingly hierarchical view of webspace. They recognize that simply giving users raw searching power over all known websites is not enough; users also need some idea of what's where.
Many sites are simply lists of links to all known sites covering a certain topic. Such sites are primitive, text-based maps, yet even this pittance helps to combat the lost-in-webspace feeling.
Many sites try to give their visitors some idea of what's where and what's related to what within the site. Almost every page carries a button that returns the visitor to the home page, where these resources are listed, so that users who jump directly to an internal page from a search can find their way back to the main page. Many sites now use frames to give visitors an ongoing display of local context: they keep a frame containing pointers to the site's major subdivisions on screen at all times, no matter where the user jumps within the site. Users can't get lost in such sites. Of course, as soon as they step back out onto the web, they're lost again.
Many users with large numbers of bookmarks use their browsers' ability to group bookmarks into directories, clustering the sites they've already visited. This is a primitive form of mapping, but every little bit helps. No one can keep track of a few thousand bookmarks without some kind of map of what's there. Further, what if the web's volatility increases, so that even when the user clicks on the right bookmark, the page is no longer there or has been modified?
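The idea of bookmark directories as a primitive map can be sketched as a simple tree: folders contain subfolders and bookmark entries, and finding a bookmark also recovers its context, the path of folders above it. The folder and site names below are hypothetical, chosen only for illustration.

```python
# A minimal sketch of bookmark directories as a primitive map: a nested
# tree of folders (dicts) whose leaves are bookmark titles mapped to URLs.
# All folder names, titles, and URLs here are made-up examples.
bookmarks = {
    "Search": {
        "AltaVista": "http://altavista.digital.com",
        "Yahoo!": "http://www.yahoo.com",
    },
    "Universities": {
        "Physics": {
            "MIT Physics": "http://web.mit.edu/physics",
        },
    },
}

def locate(tree, title, path=()):
    """Return the folder path down to a bookmark title.

    The returned path is the 'context' the flat list of bookmarks lacks.
    Returns None if the title is not in the tree.
    """
    for name, node in tree.items():
        if isinstance(node, dict):          # a folder: descend into it
            found = locate(node, title, path + (name,))
            if found is not None:
                return found
        elif name == title:                 # a bookmark entry: match on title
            return path + (name,)
    return None

print(locate(bookmarks, "MIT Physics"))
# ('Universities', 'Physics', 'MIT Physics')
```

The point of the sketch is that the directory structure, not the individual bookmark, is what answers "where am I?": the same lookup over a flat list would return only the title itself.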
Many users with large numbers of pages on their computers organize them into categories and subcategories. That organization is a primitive map of where things are. Although users could find any particular page given enough time, simply locating a page among thousands gives no context for it. Usually, when we work on a page, we also want related pages close at hand. We could always jump to any page, but most of the time we don't; we usually work within a context.
Besides the obvious suspects like Sun, Apple, Microsoft, Netscape, AltaVista, Yahoo!, HotBot, Infoseek, Lycos, and Excite, various non-profit groups that publish heavily on the web might start sharing their views of the data, thereby building partial maps of the web. All the world's universities could conceivably create maps of departments, faculty, students, and subjects at all of their sites. The world's libraries could do the same. Such pooling seems less likely for commercial sites, but there might be advantages for some---for example, all the world's realtors, all the world's travel agencies, all the world's sports equipment manufacturers, and so on.