Personal notes from mainly applications area
the IETF meeting in Minneapolis, March 1999
By Professor Jacob Palme,
Stockholm University and KTH Technical University
Here are my personal notes from the IETF meeting in Minneapolis, March 1999. Opinions in these notes are not always my own, but rather things I heard people say during the meeting. The selection of what to report is based on my personal interests, and is mostly from some of the meetings in the applications area.
Statistics on Ongoing Working Groups:
This meeting in Minneapolis has 13 WG meetings and 7 BOFs in the applications area! (A BOF is a meeting on a topic which does not yet have a working group, but which might get a working group in the future.)
The area directors need help to reduce this number by closing down groups which should be ready. One problem is that the IETF secretariat does not allow a group to be shut down if the group has currently active documents. Thus, WG chairs should not permit documents to be published in the name of their group too liberally.
New groups created since last IETF meeting: IMPP (Instant Messaging and Presence Protocols) and WREC (Web Replication and Caching).
IMPP especially needs help from experts in the transport and security areas.
Short Introduction to BOFs in the APPs area:
APPLCORE: Standard to contain basic elements which can be reused in many new standards.
RESCAP: Finding capabilities, given an ID like an e-mail address.
IPP2IFAX: IPP=Internet Printing, IFAX=Internet fax; this BOF will discuss relations between these areas.
SIEVE: MTA filtering group. Will be moved to a proposed standard without starting any IETF working group first.
VPIM: Voice Profile Internet Mail, VPIM version 2 is to move to draft standard, VPIM version 3 is to be developed, to become more universal, bringing voice, fax and e-mail together in one presentation.
CNRP: Was called HFN, Human-Friendly Names, at a previous IETF meeting. Now it is named "common names", maybe an echo of X.500 common names.
DELTAV: The WEBDAV protocols need extension for remote versioning.
How to Correspond with the Area Directors
Advice to people corresponding with the area directors: Always include the file name of an IETF draft in the main part of a document, not only in an attachment. This helps the area directors in searching for information in their mailboxes.
XML and Corba
Then there was an intensive discussion about XML. The industry is in strong movement towards XML now. We do not know whether this will continue. But if it does, people wanted XML to work better with MIME in the areas of packaging binary data (use Content-Type: Multipart/Related) and digital signatures.
I asked whether Corba was an alternative, and was told that Corba was an interoperability nightmare. If IETF does not like Corba, it should discuss how to handle objects, because the market needs tools for object communication. Other people said that Corba may be a good solution if all partners in a project are working in environments which have good Corba implementations, but that Corba-based protocols may have problems getting accepted as IETF standards, because there are not good Corba implementations on many important platforms.
XML started as a markup language, and is moving towards solving everything for everyone, something which might be dangerous, someone said.
People said that two groups are doing things, which may not be well compatible with IETF methods: Infrared devices groups and telephone over IP groups.
Intellectual Property Rights and Patents
There was a long discussion about intellectual property rights and patents. A person who knows that his company may have intellectual property rights must either withdraw from the IETF working group or inform the working group that his/her company may have intellectual property rights in the areas being discussed in this group. Some people may have problems saying even this, because of non-disclosure agreements.
The Dell case: Dell joined a consortium which had an explicit agreement, and violated that agreement. IETF does not have any such explicit agreement, and this may mean that the Dell case may not be applicable to IETF work.
Usenet Format (USEFOR) Group
This group is very active, but has problems. It is spending huge amounts of time discussing issues which will probably never get into any standard. The present chair wants to resign, and they need a strong chair with much IETF experience to get them onto the right track.
This meeting today has the task of defining the charter (not making solutions) for a core protocol, which can be used in the future as a basis for various different protocols, so that each new application need not start all the way at raw TCP/IP.
Goals: Learn from mistakes, make coding simpler, allow re-usable code, keep it simple.
Non-goals: New transport layer, no grand OSI-style layered architecture, not something all future applications must use.
Discussing: Are different applications so similar that they can use a core protocol? Do we know that things will really be simpler with a new core protocol?
What is the problem we are trying to solve? I suggested an investigation should be done of "user needs", where the "users" are the future application protocols and their developers.
New protocols have been developed on top of HTTP and on top of LDAP. We should investigate how they did it, and what they needed from the core protocols they used.
Should we make a list of user needs now, or later during the work in the working group? Should this list restrict the tasks of the working group?
Hum voting: Restrict the set of items now, or at an early stage in the work of the group. No clear consensus. One possibility is to first have a working group to make this list, and then a new working group to develop the core protocol based on this list. A very long discussion followed on this issue. The discussion was interrupted by vote attempts, which failed to show consensus either way, and then new discussion.
One important issue is extensibility. Larry Masinter pointed out that HTTP has many extensibility mechanisms:
Conclusion: There is considerable interest in this work. But we should start by making an investigation of the problems and issues and needs first, and produce a historical document on what we have learnt from previous standards. After that, we may start a group to really develop the core protocol (or core framework, as some people thought it should be named).
The group chairman, Richard Shockey, and Larry Masinter have helped me improve the report on this session.
The Internet has standards in development for fax over e-mail: IFAX and EIFAX.
It was said at the meeting that both these standards aim at transmitting non-alterable documents (but of course all documents are to some extent alterable). But IFAX and EIFAX are based on store-and-forward transmission of the fax through e-mail. They are thus unable to satisfy an important requirement from fax users, according to market surveys: they want, just as with ordinary phone-based fax, an immediate confirmation of delivery. Delivery Status Notifications can be repudiated, and are not available immediately.
The task for this work is not to replace IFAX and EIFAX, since store-and-forward fax handling is also useful. The task is to develop a profile of IPP (with possible extensions) for fax usage, based on direct connections and immediate confirmations.
Problems with existing protocols:
IPP 1.1 addresses these issues.
Discussion: Intranet or Internet service? The chairman says it is a global service. IPP can be made a global scheme. It has its own URL scheme, ipp://, and its own port number, 631.
Designed well and enhanced and augmented, IPP could be a total end-to-end replacement for GSTN Fax.
Differences between printers and fax machines: Fax machines acknowledge messages and time-stamp them. (There are other differences, not mentioned at the meeting.)
Basics of document transmission (both Ifax and IPP): Creation, Addressing (URLs), Negotiation, Transmission (using HTTP Post), Delivery Receipt, Security (using HTTP Digests and TLS).
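The transmission step above can be sketched in code. This is a minimal illustration of submitting a document with an HTTP POST on IPP's port 631; the host name, path, and job data are made-up examples, and a real IPP request would carry a binary application/ipp payload, which is omitted here.

```python
def build_post_request(host: str, path: str, body: bytes) -> bytes:
    # IPP requests travel as HTTP POSTs; IPP's registered port is 631.
    # This only constructs the request bytes; nothing is sent.
    headers = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}:631\r\n"
        f"Content-Type: application/ipp\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
    )
    return headers.encode("ascii") + body

# Hypothetical printer endpoint and job data, for illustration only.
request = build_post_request("printer.example.com", "/ipp/print", b"...job data...")
```

Delivery receipt and security (HTTP Digest, TLS) would be layered on top of this same exchange.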
ITU has a standard in this area, but its use will probably be in LAN or carrier gateways, not as an end-to-end user protocol.
What needs to be done to IPP to suit FAX needs?: Time/Date log, Sender identity exchange, Gateway from printer to fax services, Digital signatures, Cover pages.
Goals and deliverables
Can we extend RFC 2542 (Terminology and Goals for Internet Fax)? Perhaps best as a delta to the existing document.
An alternative to IPP might be session-mode e-mail.
The main difference between what we are discussing and IFAX is confirmation of delivery at the time of transmission, which IPP may be extended to provide but which e-mail, on which IFAX is based, cannot provide.
IPP will also allow higher-quality output, since client and server can negotiate qualities on the wire. TLS gives IPP the possibility of better security. Sender identity can be spoofed in e-mail, but verified identity is sometimes a legal requirement for fax.
T.31 does not have an addressing scheme like the use of URLs in IPP. Co-ordination with T.38 (the ITU standard for tunnelling using voice-over-IP) is needed to avoid duplication of standards. T.37 is an ITU standard referencing IFAX.
About 40 people were present, about 6 of these were willing to do work in this area. A large majority thought this was an area worth pursuing.
Very important: Co-ordinate with ITU work, avoid duplication of T.37 and T.38.
An EMA Work Group. Mailing list: firstname.lastname@example.org. To subscribe, send mail to email@example.com with "subscribe VPIM-L" as the subject. This group also has its own meetings between IETF meetings. (Announcements and minutes from off-line meetings are important, someone said.)
VPIM version 2 is already published as RFC 2421. This meeting was about extending it to version 3. But work will also continue on progressing VPIM version 2 to draft standard.
Interworking between multi-vendor voice messaging systems
Support of voice and fax attachments
Interworking between voice messaging and the Internet and the Desktop.
Desktop interworking (using personal computers instead of telephones as user interfaces) requires support for the CODECs which are commonly available on various desktop platforms. CODECs for which source code is available would be an advantage. Streaming is wanted; text body parts are wanted for critical content; a way of handling clients which cannot do audio is wanted. Messages should be storable in databases, and it should be possible to retransmit and forward messages and to create mail digests.
Discussion: Should the standard be extended to support more CODECs, or should software for the various personal computer platforms be written to support the existing CODECs in VPIMv2? Must every client support lots of different CODECs? Can this be solved with capability negotiation (using the same technique as for fax)?
VPIMv2 (IETF proposed standard)
VPIMv2 uses the multipart/voice-message MIME type, uses ESMTP with extensions for efficient and reliable transport, and allows G.726 and TIFF-F encoded voice and fax, as well as vCard.
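A minimal sketch of such a message skeleton, using Python's standard email package: the multipart/voice-message container and the audio/32KADPCM subtype (G.726 audio) follow RFC 2421, but the recipient address and the audio bytes here are placeholders, not real G.726 content.

```python
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart

# Build the VPIM-style container.
msg = MIMEMultipart("voice-message")
msg["To"] = "recipient@example.com"   # hypothetical address

# Attach a voice body part; payload bytes are placeholder data.
voice = MIMEBase("audio", "32KADPCM")
voice.set_payload(b"\x00\x01 placeholder audio")
encoders.encode_base64(voice)
msg.attach(voice)

print(msg.get_content_type())  # multipart/voice-message
```

A real implementation would also add the TIFF-F fax part and a vCard in the same container.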
All major vendors have committed to support it. Demonstrations are planned for summer and fall of 1999.
Unified Messaging Concept
VPIM version 3 will combine all different kinds of content in a unified messaging concept.
Interworking with version 2
VPIMv2 addresses use phone numbers in the left-hand part of e-mail addresses, and look like firstname.lastname@example.org or email@example.com if a telephone extension is wanted. This is based on RFC 2303.
VPIMv3 will allow e-mail addresses like:
Delivery of the primary part will count as a positive delivery, even if other body parts cannot be rendered. This might be named multipart/optional; it is different from other e-mail standards, someone said.
This is a continuation of the session on human-friendly names at the previous IETF meeting. We are no longer aiming at globally unique identifiers. My notes from that session give a fuller description of the problem area.
Mailing lists: firstname.lastname@example.org
People want to type in common names instead of URLs in the location field in web browsers. Our task is to develop a protocol for communication between browsers and directories in resolving such names. In particular, internationalization and character set issues will be studied. Note that février in European French is uppercased to FÉVRIER but in Canadian French to FEVRIER. So case-insensitive matching is not always easy!
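The matching difficulty above can be illustrated with one possible normalization (not a mandated algorithm): fold case, then strip combining accents, so that both uppercase conventions for "février" compare equal.

```python
import unicodedata

def loose_key(name: str) -> str:
    # Case-fold first, then decompose accented characters (NFD) and
    # drop the combining marks, so FÉVRIER and FEVRIER both normalize
    # to "fevrier".
    folded = unicodedata.normalize("NFD", name.casefold())
    return "".join(ch for ch in folded if not unicodedata.combining(ch))
```

Whether a real resolver should treat accented and unaccented forms as equal is itself a policy question; this just shows the mechanics.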
Dun & Bradstreet has a name-resolution service. You can include geographical area in the request to limit the matches.
RFC 2345 and RFC 2517 report experience of value for this work.
draft-moseley-ohfn-00.txt: An Architecture for Open Human Friendly Naming of Internet Resource Addresses.
The group will produce two documents, a requirements document and a proposed standard. There was some discussion on whether a requirements document is needed. There are two different personal requirements documents, which is why they want to unify these into agreed requirements.
Keith Moore: The word "requirements" may not occur in the title of an informational RFC. Use words like "goal" or "scenarios" instead.
Pseudonyms, such as Mark Twain, should be mapped to the real name, in this case Samuel Clemens.
"Cambridge" can map to a city in England or a city in Massachusetts or to a company.
Question: Do you allow return of a list of candidates, or should the service always produce a one-to-one mapping from name to URL? Registration of unique names is not part of the charter of this group, nor is legal "ownership" of names.
Security issues: Will this service have authentication or other security facilities? Will the authentication be good enough to support commercial applications?
Anonymous callers will be supported.
Compatibility with cost recovery systems is needed.
Should spoken input and speech recognition be supported? Technically, speech recognition today only works with a limited vocabulary.
Do we want a simple solution, or an advanced database search facility? Should users be able to input restrictions like "I am searching in Massachusetts only" to get single answers? This can become very complex and advanced.
What should be done with resources which are replicated in many places? How do we avoid returning 17 matches to the same resource, replicated in 17 different places?
Uniqueness is a result of the policies and management of a server. Some servers have as a business model to ensure unique matches, i.e. only registering one URL for each name. This will be easier to provide with servers restricted to certain classes of names.
Question: What is the difference between this and a search engine, which for the word "apple" will return pages about both the computer manufacturer and the fruit. Answer: A common name resolver is limited to some particular area, such as trade names.
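A toy registry can make the ambiguity discussion concrete: a common name may map to several candidates, and a restriction such as geographic area narrows the answer. All entries below are made-up examples, not real services.

```python
# Hypothetical registry: one common name, several candidate mappings.
REGISTRY = {
    "cambridge": [
        {"url": "http://city.example/uk", "area": "England"},
        {"url": "http://city.example/ma", "area": "Massachusetts"},
        {"url": "http://company.example", "area": None},
    ],
}

def resolve(name, area=None):
    # Return all candidates for the name; an optional area restriction
    # filters the list toward a single answer.
    candidates = REGISTRY.get(name.lower(), [])
    if area is not None:
        candidates = [c for c in candidates if c["area"] == area]
    return candidates
```

Whether the service may return such a list at all, or must force a one-to-one mapping, was exactly the open question above.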
We cannot dictate the policies of the lookup services, but we can define which kind of services our protocol will be aimed at.
In scope: How do you discover registry services?
"How, when, why would you use this service" should be specified.
There were about 70-100 people present. About 15 people indicated they were willing to work on this issue. 4-5 people were willing to work only on company name lookup.
About 40-50 people were present.
This group plans to develop extensions to HTTP for management of web sites. This will include versioning, parallel development followed by merging, and configuration management of web sites. It is based on the earlier work in the WebDAV group about distributed authoring. (Proposed charter).
WebDAV has support for locking and uploading of revised documents, but does not yet have versioning.
By versioning is meant tracking multiple revisions, following the revision history backwards, and using a checkin/checkout mechanism for managing update conflicts, more powerful than what WebDAV has at present. Properties are parts of resources, and can be revised along with the resource. Properties can be mutable or non-mutable; mutable means that you can change the property without producing a new document with a new ID. The revision label of a document is mutable, but the ID of a document is not.
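The checkin/checkout model can be sketched as follows. This is an illustrative data model only, not the DELTAV protocol: the class and method names are invented, but they show the immutable ID, the mutable revision label, and how a checkout blocks conflicting updates.

```python
class VersionedResource:
    def __init__(self, resource_id, content):
        self._id = resource_id        # immutable: never changes for this resource
        self.label = "1.0"            # mutable property: can be relabelled freely
        self.revisions = [content]    # revision history, oldest first
        self.checked_out_by = None

    @property
    def id(self):
        return self._id

    def checkout(self, author):
        # Only one author may hold the checkout at a time.
        if self.checked_out_by is not None:
            raise RuntimeError("already checked out by " + self.checked_out_by)
        self.checked_out_by = author

    def checkin(self, author, new_content):
        # Checkin appends a new revision and releases the checkout.
        if self.checked_out_by != author:
            raise RuntimeError("checkin without matching checkout")
        self.revisions.append(new_content)
        self.checked_out_by = None
```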
Configuration management concerns the problems of handling revisions of structures of interlinked web pages, where a change in one page may influence other pages.
The standard should allow interworking in reasonable ways with ordinary HTTP clients without support for the new facilities.
Server-server configuration management is not included, only management of a configuration handled by one single server. This may be added in future development, but not in the present group.
Variants are not included; this may be future work in the next group.
Authoring against caches is not included, all operations should be done against the origin server for the managed resources.
Management of change proposals and workflow is not part of the scope of the suggested new group.
Question: Can you edit dynamically generated pages? Answer: Probably not, but you might be able to edit the script which generates this page.
The protocol will include multiple levels of support, and a way of finding out which levels are supported by a particular server.
Activity = set of related changes to several documents, for example linked by hyperlinks so that they must be changed at the same time. Other terms for not quite the same thing: Change set or change package.
An activity helps you know what needs to be merged of parallel changes.
Discussion: Can I check out resources A and B at the same time, without the risk of someone else checking out B at the same time as I check out A?
Configuration = set of old revisions of various resources, so that I can get back and recreate this configuration again.
Revision = A specific instance of the history of an item. Every revision has its own canonical URL. Each revision has a unique name.
Workspace = A configurable view of the versioned resources. You cannot version a workspace, but you can create a snapshot of it.
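The workspace/configuration relationship above can be sketched in a few lines. This is an illustrative model with invented names, not protocol machinery: a workspace is a mutable view of current revisions, and a snapshot freezes it into a configuration that can later be restored.

```python
class Workspace:
    def __init__(self):
        self.current = {}            # resource URL -> revision number

    def update(self, url, revision):
        self.current[url] = revision

    def snapshot(self):
        # A configuration: an immutable set of (resource, revision)
        # pairs, so the exact state can be recreated later.
        return frozenset(self.current.items())

    def restore(self, configuration):
        self.current = dict(configuration)
```

Note that the workspace itself is not versioned, matching the definition above: only its snapshots are.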
Relative and absolute URLs should continue to work.
Question: If Mary checks out a configuration, and deletes object B from it, and if John does the same thing, do they then have equivalent configurations? Answer: The configuration still contains B.
Question: Can a configuration reference other configurations? Answer: Probably yes. Might not be allowed in the first version of the standard.
The background to this working group is the wish to give priority to certain packets (belonging to applications like streaming video). It is a protocol for servers to specify policies for such prioritizing. Earlier attempts have failed because they were too complex, so now they claim they are doing something simpler.
They are using an LDAP/X.500-based structure, because this structure can handle a hierarchy of domains and subdomains, with policy inheritance from superdomains to subdomains, as shown by the example below:
ou
|-- Some policy entries
+-- administrativeDomain
    |-- other policy entries
    |-- computer
    +-- service
        |-- more policy entries
Policy entries are stored in policy containers, which can be associated with objects in different places in an administrative structure.
Policy rules consist of boolean conditions called policy conditions. It is defined so that subclasses can add to the policy conditions of superclasses. An important goal of the design is to reduce the number of downloads by reusing the same rules several times.
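The inheritance described above can be sketched as a walk from the root of the hierarchy down to a leaf entry, accumulating rules so that subdomains inherit (and may extend) what superdomains define. The entry names and rule strings below are invented for illustration; a real deployment would read them from an LDAP directory.

```python
# Hypothetical directory entries mapped to their policy rules.
POLICIES = {
    "ou": ["default-best-effort"],
    "ou/adminDomain": ["video-gets-priority"],
    "ou/adminDomain/service": ["limit-bulk-transfers"],
}

def effective_policies(dn):
    # Accumulate rules from the root down to the given entry, so that
    # each subdomain inherits its superdomains' rules.
    parts = dn.split("/")
    rules = []
    for i in range(1, len(parts) + 1):
        rules.extend(POLICIES.get("/".join(parts[:i]), []))
    return rules
```

Because the same rule objects are shared across the tree, a client caching them needs fewer downloads, which is the reuse goal mentioned above.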
There was a long discussion of whether to allow simple policy conditions. If I understood it rightly, there is an ordering problem if a superclass has complex policy conditions with multiple, ordered actions, and an inner class has a simple policy condition with a single policy rule.
A transition is an event, a pulse, which moves from one state to another state. Examples of transitions:
A state is something which stays stable for a time period. Examples of states:
Not all kinds of Boolean conditions are allowed on all kinds of entries. AND between two transitions is meaningless, since transitions occur in zero time, so two transitions can never be assumed to occur at the same time. For example, "IF it is 4 p.m. AND 5 p.m." is meaningless, but "IF it is 4 p.m. OR 5 p.m." is valid.
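That validity rule is simple enough to state in code. The tuple-style condition representation below is a made-up sketch, not anything from the draft: it only rejects AND over two or more transitions, the case the example above calls meaningless.

```python
def valid_condition(op, kinds):
    # op: "AND" or "OR"; kinds: operand kinds, each "transition" or "state".
    # ANDing two zero-duration transitions can never be true, so reject it.
    if op == "AND" and kinds.count("transition") >= 2:
        return False
    return True
```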
This group is working on defining terminology and framework for agreeing on media types, not only to agree on a certain media type, like "GIF image" but also agree on its features, such as resolution.
Example of use: You have a low-bandwidth connection to a resource provider, but that provider has a high-bandwidth connection to the Internet.
Two RFCs have been published recently: BCP 31/RFC 2506 on Registration procedures and RFC 2533 on Syntax.
(& (type="image/tiff")
   (color=binary)
   (image-file-structure=TIFF-S)
   (dpi=200)
   (dpi-xyratio=[200/100,200/200])
   (paper-size=A4)
   (image-coding=MH)
   (MRC-mode=0)
   (ua-media=stationery) )

(| (& (type=["text/plain","text/html"])
      (color=limited)
      (paper-size=A4) )
   (& (type=["image/gif","image/jpeg"])
      (color=mapped)
      (pix-x<=800) (pix-y<=600) ) )
There was a discussion about the feature tag types shown in the example above. MIME type parameters are not allowed; their information should instead be specified by separate feature tags.
There is no font feature tag, because of problems with fonts owned by various manufacturers. There is a need for work on this. Larry Masinter pointed out that profiling of character sets and subsets might be needed, but that this would be so difficult that we had better not handle this.
A new MIME header is proposed, called Content-Features to indicate the new features.
There was a long discussion about the MIME rule that if you send multipart/alternative, you should send the simplest variant first and the most advanced, and best, variant last. This rule is difficult to apply when you have a number of variants with different Content-Features. Is a color picture with 72 dpi more advanced than a gray-scale picture with 300 dpi?
The group is going to simplify reference to feature sets by giving names to feature sets, so that you can refer to them by name instead of by the full text of the feature set. Two different variants of names for feature sets were discussed: one was a cryptographic hash value of the data, the other a URI. A cryptographic hash has the advantage that you do not risk getting a stale cached value, as you can when you use a URI to access something. This could also optimize communication, by not having to transfer the whole feature set every time you need to refer to it. Why is this needed? Because they want to register very complex feature sets, for example "all the formats which a standard Windows 98 installation can handle".
The hash is a way of uniquely identifying a feature set without the need for any kind of registry! This also has a disadvantage: a registry comes with a lookup mechanism for the description, while a hash value does not directly refer to any lookup system.
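The hashing idea can be sketched briefly: canonicalize the feature-set text first (here, just collapsing whitespace) so that the same feature set always yields the same name, with no registry needed. SHA-1 is used only as an example digest; the notes do not specify an algorithm.

```python
import hashlib

def feature_set_name(text: str) -> str:
    # Collapse runs of whitespace so formatting differences do not
    # change the name, then hash the canonical form.
    canonical = " ".join(text.split())
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()
```

The same text laid out differently produces the same name, which is exactly what lets two parties agree on a feature set without a central registry.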
Suppose you have installed Windows 98, but have deinstalled Explorer 4 and instead installed Netscape 4. Can you then send the feature name of Windows 98, plus some added conditions to say in what way your installation differs?