Los Angeles (CA) - Toward the end of Microsoft’s Professional Developers’ Conference here on Thursday, we sat down with the company’s group product manager for Windows, Greg Sullivan, to chat about an emerging "big picture" vision of Windows appearing and working everywhere - not just on PCs, but on handheld devices, the back ends of digital cameras, and the ice makers on refrigerators. Connecting all these seemingly unrelated devices is a standards-driven system for sharing contextual information that Microsoft - and perhaps only Microsoft - is developing.
For software developers, hardware engineers, system administrators, and general consumers alike, the concept implied by the phrase "only Microsoft" can be daunting. We’ve seen what can happen, and what could possibly have happened, if "only Microsoft" were involved in developing products upon which businesses depend for their livelihoods. To put it succinctly, it’s been a mixed bag. Open standards developers and Linux developers would argue that they’ve created and even invented the methodologies that Microsoft announced or spotlighted this week. But for Linux supporters, the process of connecting those standards has been mired in controversy, arguments, and more than a few bitter feuds - the kind of social enigmas that a corporate culture such as Microsoft’s would never appear to allow. So when we talked with Greg Sullivan at PDC this week, we talked about whether the emerging role of Microsoft as a builder of a massive information network was real as well as intentional.
Tom’s Hardware Guide: I don’t know how much television you watch, Greg, but I have a favorite episode of "The West Wing" where Pres. Bartlet is teaching all of his aides about the intricacies of international diplomacy over separate games of chess. In one scene, the president puts his hands around the chessboard, and says to Sam, one of his chief aides, "You have to see the whole board." So play along with me, if you would - you’re the president, and I’m Sam. "Mr. President," I think I’m seeing a big picture emerging; tell me if I’m wrong: Office 12 makes XML documents that can be translated using XSLT - an open standard mentioned here this week - into XAML, which produces front-end consoles that can be displayed through any browser, and which can be aggregated using RSS - another existing open standard - in order to derive the metadata that makes them publishable using SharePoint... which eventually becomes imported into Office 12. "Mr. President"... you’re not in the standards-building business anymore, are you? You’re making a big information-sharing network that weaves these standards together. Is that the big picture?
GS: It’s a very astute observation to see the common thread and the themes among these components, because it’s not by accident. PDC 2000, I think, is where we first began to articulate this new strategy, because we’ve been talking about some of these scenarios for 10 or 15 or 20 years. It was 15 years ago that program-to-program communication was through Remote Procedure Calls, and we made a big deal about NT 3.1 having [standards]-compliant RPC mechanisms... Turns out that’s pretty low-level and very difficult, but [what’s important is] this notion of a kind of information, a way for software to talk to software, and to do that in a way that scales depending on the need... At some levels, it can be quick and dirty; it doesn’t require authentication, it doesn’t require some of the specific contracts that we have at some of the higher-level interactions. But you build that stack, and you look at it across all of these systems, and you think, "Okay, what is a generalized approach to enabling software to talk to software, and information to flow around?"
We made a big deal about it, and now what you’re seeing is that strategy really coming to fruition in products. So clearly in Windows and in IE, we’re building an RSS platform in; we built on XML, SOAP, and other infrastructures. Since Windows XP, we’ve taken the approach that there ought to be a way that is, in terms of standards, very much based on XML, utilizing RSS, and we’re building these into the systems as core platform technologies that are going to enable that degree of communication at all levels of the stack, regardless of the application or the scenario, so it’ll be appropriate.
It all comes back to the emphasis on XML, as a model for information interchange. RSS is a notification system for XML. Building that into our platform, [with] SharePoint and the servers, is not by accident.
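Sullivan’s point that "RSS is a notification system for XML" can be sketched concretely, since an RSS feed is itself just an XML document. The feed title, item names, and SharePoint-style URL below are hypothetical; this only illustrates how changes to a document store could be published as a standard RSS 2.0 feed that any aggregator understands:

```python
# Minimal sketch: publishing document-change notifications as RSS 2.0.
# All titles and URLs here are invented for illustration.
import xml.etree.ElementTree as ET

def build_feed(title, items):
    """Build an RSS 2.0 feed; each item is a (title, link) pair."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for item_title, link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")

feed = build_feed("Team documents", [
    ("Q3 budget updated", "http://sharepoint.example.com/docs/q3-budget"),
])
print(feed)
```

Because the notification layer is ordinary XML, the same parsers and tooling handle both the feed and the documents it points to - which is the "substrate" idea Sullivan returns to below.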
THG: I’m reminded of the evolution of AT&T... when it occurred to them that they weren’t in a device or a phone business but a network business. As you’ll recall, there wasn’t another network service provider in this country for a long period of time, because the network became a public utility. Nobody stood up, called it a good idea, and tried to build another nationwide public utility to compete with it. Do you see a "Bell System" of sorts in Microsoft’s future, for information connectivity, where you guys make the connection, and then it becomes more profitable and more important to you to operate the system of information distribution than the operating system that plugs into the PC and reads bits from the hard drive?
GS: I’m not sure that’s the analogy, though I think there’s a challenge there. It’s one thing to build the physical network, and then to have competition arise from the parallel development of other physical networks, which is a pretty significant barrier to entry. I think the way the comparison doesn’t necessarily hold true is that, in our system, the barriers to entry are less significant than for that literal interpretation of it. I think at one level, though, it is true that, at our core, we’re a platform company. During the mid ’90s, [we had many] experiments going on, and you saw Microsoft make a lot of investments in businesses that we’ve since either divested, scaled back, or kind of rethought what business we’re really in. First and foremost, we’re a platform company. And in that respect, it’s a good analogy. We need to think about providing the utility in terms of the operating system, the tools, and the platforms to the various audiences that we serve.
THG: It occurred to me that Linux companies are not doing this... If a Linux spy were at the PDC today... he’s going to come to the conclusion, "We’re not putting it together. We’re not connecting the dots."
GS: Well, we do agree that that’s a differentiator for us... We used to have this thing, a long, long time ago, about how [Windows’] capabilities apply across all these different ranges of devices that Windows ran on, all the way down to PDAs, all the way up to clusters and servers. It’s the same approach for some of these tools as well. The scenarios are different, and the applications are different, but [the application we demoed at the keynote, with Netflix running on PCs, Media Centers, and cell phones], for example, shows that we can take a subset [of .NET] and scale it down to the smallest devices, and still get the developer efficiency and productivity, so it’s connecting in multiple dimensions. It’s investing in XML as a means of interchange. It’s building RSS as a notification mechanism into the system.
It’s about data living everywhere and being everywhere, and there needs to be this kind of substrate to connect [data] to each other in the appropriate way, and there needs to be the right kind of lightweight level of contract, or the transaction of that data, and whether it needs to be logged and secure and encrypted, or whether it can just be quick and dirty. We’re looking at all those dimensions, and building it into all the various components of the platform. So I think it’s a very astute observation to look at the whole picture.
THG: I’m going to do my best Don LaFontaine impression: "In a world..." where a user can query a corporate network and receive contextual information structures, based on SharePoint shared data, from RSS, and see those in virtual folders, and be able to aggregate the content of those folders... in that type of best-case scenario for the future of Vista, going out into 2008, 2009, would that user, with that much power, even need Google?
GS: Yes, because we don’t expect to index the Internet. That’s currently not something that is planned as a desktop operating system attribute. The MSN team has a lot of people working very hard on building great search technologies, and they have done the same thing in terms of indexing the Internet, so I don’t think the need goes away for tools and servers and search engines that can aggregate across all of the many, many millions of Web pages out there, and provide relevant hints as to what you’re looking for. That’s a big problem that is solved [most] appropriately in the cloud.
THG: But it seemed to me that there’s a possibility here for a lot of the work that is done in Google-style research to actually take place through querying information that’s aggregated in XML format - and the new Word document format that’s coming up in Office 12 is going to expose not just content, but some of the meaning behind it. Theoretically, you could "P2P" that. You could have collectively indexed content. You wouldn’t need a centralized index of the Internet, theoretically... Mr. President.
GS: Potentially, in such a queriable P2P mesh infrastructure, you’d have local content indexed. [But] I don’t want my local content being exposed as part of that, and turning up in [someone else’s] query results.
THG: But you’ve got the controls; the rights management is there.
GS: But if you scale that problem down and think of it in terms of an enterprise, then it becomes a much more feasible - near-term - and frankly, for businesses, much more interesting solution. If I can aggregate the peer indices and then present an interface on any client that would enable me to search my corporate enterprise for information to which I have associated access privileges, stored in whatever format... that would be a very powerful tool. That’s a step that we’re clearly making in the context of an enterprise. And then there’s an appropriate context between enterprises for the sharing of information, and workflow, and value chain creation based on those protocols and contracts. But I think we’re looking at that as a step, as a problem that we want to solve for enterprises and business-to-business contracts... because the federation of those indices requires trust contracts. There’s a lot of infrastructure there in terms of privacy, access. And it turns out that the problem is being pretty well solved today, in the cloud, by the client/server model rather than the peer model. But the peer model is more applicable for that local content that’s relevant to an enterprise, and relevant for enterprise-to-enterprise, enterprise-to-customer scenarios.
So I would think the thing that we want to solve first is this: let me go back to Microsoft and look for the answer to a question where I know somebody has great information that isn’t necessarily put on a Web page and indexed, but lives on somebody’s hard drive - information that maybe they would be willing to let people view, because the rights management is associated with it. Then I could use just a very simple interface with a shell and say, "Don’t just search my drive for this content, for this PowerPoint slide I can’t find - search everybody’s who might have it in the company." That’s a scenario that’s probably much more likely in the near term, and it relates to local content.
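The federated-search scenario Sullivan outlines - fanning a query out across peers’ local indices while honoring access privileges - can be sketched in miniature. Everything below (peer names, document titles, group labels) is invented for illustration; a real system would layer on authentication, trust contracts, and genuine rights management:

```python
# Toy sketch of federated enterprise search across peer indices.
# Peers, documents, and group names are all hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Peer:
    name: str
    # Local index: document title -> set of groups allowed to read it.
    index: dict = field(default_factory=dict)

    def search(self, term, requester_groups):
        """Return locally indexed titles matching the term that the
        requester's group memberships permit them to see."""
        return [title for title, allowed in self.index.items()
                if term.lower() in title.lower() and allowed & requester_groups]

def federated_search(peers, term, requester_groups):
    """Aggregate per-peer results, tagging each hit with its source peer."""
    hits = []
    for peer in peers:
        hits += [(peer.name, t) for t in peer.search(term, requester_groups)]
    return hits

peers = [
    Peer("alice-laptop", {"FY06 budget.ppt": {"finance"}}),
    Peer("bob-desktop",  {"Budget review notes.doc": {"finance", "sales"}}),
]
# A member of "sales" sees only the document shared with that group.
print(federated_search(peers, "budget", {"sales"}))
# → [('bob-desktop', 'Budget review notes.doc')]
```

The key property Sullivan stresses is visible even in this toy: the access check happens at the peer holding the content, so a document never leaves a machine for a requester the rights metadata does not cover.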