Personal notes from some
applications area sessions during
the IETF meeting in
San Diego, December 2000
By Professor Jacob
Stockholm University and KTH Technical University
Here are my personal notes from the IETF meeting in San Diego, December 2000. Opinions and statements in these notes are mostly not my own, but rather quotes of things I heard people say during the meeting, even when I do not use quotation marks or name the person being quoted. The selection of what to report is based on my personal interests and covers mainly the applications area. In some cases, what I quote below are statements which sounded interesting to me, but which I did not always understand fully.
It would not surprise me if the total cost of this meeting, for all participants together, is more than one million dollars. It is then surprising that so much time is spent on trivial issues like "can we be ready with the charter on January 5th?" compared to the time spent talking about the serious technical issues which we were gathered to solve. But perhaps technical issues are easier to discuss on the mailing lists, and these are the kinds of things which need face-to-face meetings to decide on?
New for this meeting was that PC projectors were replacing overhead slides, and problems with switching PC projectors between different laptops occurred many times. There is a need for a wireless protocol for communication between laptops and PC projectors, one which easily allows switching control of the projector between laptops as needed during a meeting.
Chairs: Area directors Patrik Fältström and Ned Freed.
LDAP is moving towards draft standard. (A draft standard is usually a well established standard with lots of implementation experience behind it.)
A general problem is that new RFCs often do not have correct and working references. All RFC authors are asked to minimize the number of references and to ensure that the references they do include actually work.
Reliable document access and content distribution is an area where more work is needed.
This is the area for systems like ICQ, which allow people to know when their friends are on line, and send immediate messages to such friends. For notes on previous work in this area, see http://dsv.su.se/jpalme/ietf/ietf-aug-00-notes.html#IMPP.
This was very confusing, because there were several different meetings on different topics within this area. Some were BOFs on making a simple protocol directly on top of TCP, others were established working groups on more advanced solutions, and one was a BOF on IMXP, instant messaging on top of the BEEP protocol.
But this is not necessarily wrong. Standards work can be seen as a market, where many products are provided; some succeed on the market, some do not. In this way, we might be getting better standards than if only a single standard for a particular application were allowed. RFC 2779 (Requirements) and RFC 2778 (Models and terminology) have already been produced.
The area directors asked for proposals, and nine different ones were submitted. The nine proposals were split into three groups, with attempts to define common pieces of the different protocol proposals. The three present camps are:
PRIM proposal BOF
PRIM aims at a minimalist approach: no features except those required by RFC 2779. It is a merge of four different proposals of a similar kind. draft-massoldi-prim-impp-00 is the first IETF draft.
PRIM aligns well with CPIM, but some CPIM issues are still unresolved.
Advantages of PRIM compared to other proposals: built from the ground up to satisfy CPIM requirements, a well understood architecture and implementation, scalability, good performance, and similarity to existing protocols, which allows simple migration.
Instant Messaging and Presence Working Group
On Friday, finally, the instant messaging and presence working group met. This meeting was thus with an established IETF working group, and not, as those reported earlier, a BOF meeting.
This group started out defining protocols for Instant Messaging and Presence. But it now seems impossible to get one standard, so the group is instead aiming at developing protocols for interaction between different instant messaging and presence networks.
This multi-protocol format creates problems for security. Security is based on digital signatures, and digital signatures require that the signed object is unchanged. Thus, if digital signatures are to work in a multi-protocol environment, the networks have to have a common message format. (You might also use digital signatures within separate networks, if you trust the gateways which transform messages between the networks.)
There is agreement between 7 of the 9 subgroups that this format should be based on MIME.
Those who do not like MIME would instead prefer an XML-based format. The group will not use complete RFC822, because it has too many complexities. Unlike RFC822, there will be no line-wrapping: all headers must fit on one line.
A CPIM message cannot be changed in any way: headers may not be changed or reordered, and not even their case may be altered. But you can rewrap the message in a new envelope.
They will define required headers, optional headers, and predefined headers (not required, but named for future usage).
Dates will probably be in ISO format instead of RFC822 format!
They have Message-ID and Content-Type just like in ordinary e-mail.
Transport protocols do not have to use this canonical format, but they should carry enough information to be able to re-create the canonical format.
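As a sketch only: based on the properties mentioned in my notes above (MIME-style headers that must each fit on a single line, an ISO-format date, and Message-ID and Content-Type as in ordinary e-mail), a canonical message might be built like this. All the header values, the `im:` addresses, and the function name are my own illustrations, not taken from any draft:

```python
# Hypothetical sketch of a CPIM-style canonical message. Properties used:
# MIME-style headers, each header on exactly one line (no wrapping),
# an ISO-format date, and Message-ID / Content-Type as in ordinary e-mail.
def build_canonical_message(sender, recipient, date_iso, msgid, body):
    headers = [
        ("From", sender),
        ("To", recipient),
        ("Date", date_iso),  # ISO format rather than RFC822 date format
        ("Message-ID", msgid),
        ("Content-Type", "text/plain; charset=utf-8"),
    ]
    for name, value in headers:
        # The canonical form forbids wrapped headers: one line per header.
        if "\n" in value or "\r" in value:
            raise ValueError("header values must fit on a single line")
    head = "\r\n".join(f"{name}: {value}" for name, value in headers)
    return head + "\r\n\r\n" + body

msg = build_canonical_message(
    "im:alice@example.com", "im:bob@example.com",
    "2000-12-14T09:30:00Z", "<123@example.com>", "Hello Bob!")
```

Since the signature is computed over exactly these bytes, any gateway that reorders headers or changes their case would invalidate it, which is the point of the canonical form.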
Discussion: Full MIME or simplified MIME format. No agreement reached.
Result of straw votes:
What is the minimum set of functions in the standardized header? "From" was agreed (almost all), "To" was agreed, "Date" was divided, but most want a date. RFC822 format was preferred over XML format by most people present, at least for messaging and possibly also for presence information, but this is not yet quite agreed.
NNTP is the protocol used by news clients (newsreaders) to communicate with news servers. NNTP is also one of several protocols used by news servers to communicate with each other.
The NNTP Extensions WG has the task of revising the NNTP protocol and adding some new functionality, as well as an extension mechanism for future extensions.
The group has been working for several years and is mostly finished. Only twelve people came to this meeting.
The group wants to modify RFC 977 (NNTP) to make it easier to add extensions to it, and to avoid different people adding the same extension in different ways to NNTP.
Many people want authentication, but because it is complex, they want to get a document ready without authentication first, and then add authentication.
Current practices draft is published as RFC 2980.
RFC977bis is in its 12th release within the current round. Nothing important has happened since the last IETF meeting.
The draft includes support for internationalization (UTF-8), a standard extension method, clarifies ambiguities, some minor nits left to pick.
Wildmat is back, but the specification causes confusion. The problem is whether to allow an exclamation point: you can add it to the beginning of a wildmat pattern to negate it. The latest draft clarifies this.
PAT (I believe it is a command to selectively retrieve headers based on pattern matching) is the only command which uses wildmat for matching against non-newsgroup names. Some people want to solve the problem by dropping PAT from the base document and making it into an extension.
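To illustrate the wildmat behaviour as I understood it from the discussion (this is my reading, not a verified implementation of the draft): a wildmat is a comma-separated list of glob patterns, a leading exclamation point negates a pattern, and the last pattern that matches decides the outcome.

```python
from fnmatch import fnmatchcase

# Rough sketch of wildmat matching: comma-separated glob patterns,
# leading "!" negates a pattern, last matching pattern wins.
def wildmat(patterns, name):
    result = False
    for pattern in patterns.split(","):
        negate = pattern.startswith("!")
        if negate:
            pattern = pattern[1:]
        if fnmatchcase(name, pattern):
            # A later match overrides any earlier one.
            result = not negate
    return result
```

For example, `"comp.*,!comp.sources.*"` would select `comp.lang.c` but exclude `comp.sources.unix`.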
There is some issue of time specification, whether to use GMT or zoned times, but I did not quite understand what the issue was. GMT and UTC do not mean the same thing, some people claim, but is this of any importance for NNTP?
Bad precision in times does not matter if you are using the same server, but may cause problems if you use multiple servers.
MODE READER is a facility which it is not clear whether you need or not; it could be done automatically. Certain popular servers want to have it, so it is in the draft.
Streaming should not be allowed across mode changes. The standard should specify what is a mode change, and every extension must specify if it is a mode change or not.
There were several different meetings on content distribution. As I understand it, the issue is that people want to download more and more data from the Internet, and to reduce the load on the network new solutions are needed for providing the information from the most local server. This is an extension to the caching and proxy services already used. The extensions include agreements between content delivery subnetworks to provide information to each other, creating mega-content-delivery networks. Content distribution is meant to be controlled by publisher-distributor instructions.
While the previous meeting I visited, the NNTP group, had just eleven people in a room with seats for 80, the meeting on "Open Proxy Extended Services Architecture" had 100 people trying to get into a room with seats for only 50.
Open Proxy Extended Services Architecture
They wanted to be able to provide new services connected to caching proxies. These services would give more efficient caching, content administration and network control. Examples:
Content Distribution Internetworking
Many content distribution networks (CDNs) exist or are planned. The idea is that different such networks could be interconnected, to help each other provide information. This is called "content peering" or "CDN peering". For this, a direction system, a distribution system and an accounting system are needed. There are 9 Internet drafts on this work.
Contextualization of Resolution
This work is on improving URN resolution facilities, to allow you to find an appropriate local copy of a resource. Example: get a magazine article from the local library instead of buying it from the publisher.
The DNS only allows letters a-z, numbers and hyphens in domain name parts. Upper-case characters A-Z are allowed, but are equivalent to the corresponding lower case characters. The total length of each domain part must not be longer than 63 characters.
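The classic rules just described can be sketched as a small validator. This is a minimal sketch of the rules as stated in the notes (letters, digits and hyphens, at most 63 characters per label, case-insensitive); refinements such as forbidding leading or trailing hyphens are deliberately omitted:

```python
import re

# Classic DNS label rules: letters, digits and hyphens only,
# 1-63 characters. Upper case is allowed but equivalent to lower case.
LABEL_RE = re.compile(r"^[A-Za-z0-9-]{1,63}$")

def is_valid_label(label):
    return bool(LABEL_RE.match(label))

def labels_equal(a, b):
    # Case-insensitive: "Example" and "example" name the same label.
    return a.lower() == b.lower()
```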
Some existing software will break if DNS names with other characters are handled. However, many people and organizations in countries speaking other languages than English have names with other characters and character sets, and it is natural that they want to be able to use such characters in their domain names. There is a strong user push towards allowing other characters in domain names.
There are two primary methods of doing this. One is to change the DNS to accept universal character sets like UTF-8. The other is to map universal character domain names on the allowed restricted character set.
The advantage of changing the DNS is that many existing software products will show non-English domain names correctly. The disadvantage is that some existing DNS systems will break.
The advantage of mapping from UTF-8 to the presently allowed DNS characters is that no existing infrastructure should break (since only allowed characters are actually seen by the DNS). The disadvantage is that those who have older software will see completely unintelligible domain names. Most proposed mapping schemes are based on base32 (base64 cannot be used, since upper and lower case characters must be equivalent), and it produces strings just as unreadable to humans as base64.
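To make the base32 idea concrete, here is a sketch of such a mapping: encode the UTF-8 bytes of a label with base32, whose alphabet survives the DNS case-insensitivity rule. The "bq--" prefix used to mark encoded labels is purely illustrative; the actual prefix and scheme were still being debated in the working group:

```python
import base64

PREFIX = "bq--"  # illustrative marker for an encoded label, not a real choice

def ace_encode(label):
    # base32 uses A-Z and 2-7; lower-case it, since DNS is case-insensitive.
    data = base64.b32encode(label.encode("utf-8")).decode("ascii")
    return PREFIX + data.rstrip("=").lower()

def ace_decode(encoded):
    data = encoded[len(PREFIX):].upper()
    data += "=" * (-len(data) % 8)  # restore the stripped base32 padding
    return base64.b32decode(data).decode("utf-8")
```

The output contains only characters already legal in the DNS, which is exactly why older software displays it as gibberish.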
Some people believe that encoding things into the allowed DNS characters is temporary, and that a long-term solution is to change the DNS to allow more characters. Other people see the translation method to allowed DNS characters as the only reasonable solution even for the long term.
The majority opinion is to do such mapping, because people are afraid of breaking existing DNS infrastructure by introducing new characters into the DNS. I am not sure if this is going to succeed. The market pressure to be able to see domain names in user-friendly format with existing software may be so strong that the market will enforce another solution than what the IETF experts want.
A few people want to combine the two methods, using mapping as a short-term solution but aiming at directly sending full international names later. However, this does not handle the transition period in a user-friendly fashion.
Many different proposals for doing the mapping from UTF-8 to DNS character strings are being discussed in the working group on Internationalized Domain Names (IDN).
To succeed with this, many problems have to be solved. Should the international domain name (before conversion to the allowed format) also be case-insensitive? How can we avoid more than one legal encoding of the same domain name? How can we enforce a maximum length of domain names? Are there characters which should be forbidden even in internationalized domain names? For example, characters which look similar or almost similar can cause problems, such as hyphen/soft-hyphen/en-dash, etc. This has been a controversial issue, and the number of prohibited characters is much smaller in the 01 draft than in the 00 draft of the nameprep document.
The different methods for converting arbitrary Unicode/ISO 10646/UTF-8 strings to the allowed DNS format are in this working group collectively called "ACE" (short for ASCII Compatible Encoding). Various ACE-based proposals are discussed, such as RACE, LACE, TRACE, etc.
The final decision, according to a straw poll at the meeting, seems to indicate that the working group prefers a solution which does not in any way modify the existing DNS infrastructure, which probably means some kind of ACE solution.
Transforming a unicode domain name into a DNS-allowed domain name has several steps: (1) map, (2) normalize, (3) prohibit, (4) compress and (5) base32-encode. This order leaves far fewer prohibited characters than some other orders.
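The first three steps of that pipeline can be sketched as follows. This is only my illustration of the general shape: I use case folding as the mapping step and NFKC as the normalization step, the prohibited set is a tiny invented sample, and the compression and base32 steps are omitted:

```python
import unicodedata

# Tiny illustrative sample of a prohibited set: soft hyphen and en dash,
# two of the confusable characters mentioned in the discussion.
PROHIBITED = {"\u00ad", "\u2013"}

def prepare_label(label):
    mapped = label.casefold()                           # step 1: map
    normalized = unicodedata.normalize("NFKC", mapped)  # step 2: normalize
    for ch in normalized:                               # step 3: prohibit
        if ch in PROHIBITED:
            raise ValueError(f"prohibited character: {ch!r}")
    return normalized  # steps 4-5 (compress, base32-encode) would follow
```

Running prohibition after mapping and normalization is what shrinks the prohibited set: many problem characters have already been mapped away before the check.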
The new nameprep draft says that similar-looking characters will be allowed, but will be mapped to a single encoded character. This procedure is now called "mapping" and not "case folding". This is much better, because we allow more existing names, and avoid the problem of similar-looking characters being mixed up. This should only be done for similar-looking characters which have a similar semantic meaning. So, for example, the letter O and the digit 0 would not be mapped to a single character, even though they are difficult to distinguish.
Final decisions have not been made on whether to use mapping or prohibition to stop problem characters. More input is needed on this.
Possibly different (more strict) rules could be used when registering domain names and other (more liberal) rules when looking up domain names.
Should there be a way of handling versions and future extensions of IDN? Unicode/ISO-10646 can be expected to change in the future, and this might require changes also in IDN. A problem with versioning is that a world which combines different versions will be very complex. Will all domain names disappear when you switch to a new version? Do you have to perform lookup once for each version in the future, when looking up a domain name?
A personal comment: All discussion is about the technical properties of the different methods, there is no discussion of whether there are human-interface issues which would favor one solution. For example, do the encoded strings look more understandable to humans with one encoding method than with another method? All methods seem to produce completely human-incomprehensible encoded strings. The encoded strings will be visible to humans in all software which has not been adapted to the new standard, so this might be an issue to discuss.
A BOF to investigate the need for a working group to develop a standard for registry-registrar protocols.
With multiple registrars and multiple registries, there is a need for a common protocol for their communication.
A requirements document already exists.
There are a number of incompatible proposals, but no agreement and not yet many implementations. Some are based on a single large database, others on a distributed model. The existing requirements document tries to define a minimalistic approach; other people want a wider scope: a registry which can be used for other kinds of registrations than domain names. Perhaps the protocol could be used for other kinds of registration objects (port numbers, content-types, etc.). The present system uses NIC handles, but perhaps another solution would not need them.
Statements during the discussion by various people:
We were very crowded, 100 people in a room seating 50 people, people standing outside of the door trying to listen to what was said. In the middle of the meeting, some person said that California fire regulation rules forbade people sitting in the aisles, and we very efficiently rearranged ourselves with people standing along the walls instead.
There was a lot of fun and laughter in this meeting. Perhaps the crowded atmosphere caused this? For example, someone put up a slide with rather small characters, describing XML schema specs for requests and responses to a registrar. Someone at the back of the room complained "I cannot read". Someone else said "That's because it is XML." Someone else said "Would you have preferred ASN.1?". Storms of laughter followed!
Whois and LDAP might be used, and might be merged, if I correctly understood what one speaker was saying. LDAP is perhaps not suitable for registration, but there exists open source software for LDAP so that you can test your ideas in it, one speaker said.
Internet Message Access Protocol is an advanced and complex protocol for connection between an e-mail user agent and a server, mainly for downloading mail and managing a mailbox store on the server.
This working group has been active for quite a long time, and is nearing the finish of its work. This meeting was primarily working on the following extensions to IMAP:
A general impression which I got from the discussions is that IMAP is a very large protocol, but that servers can refuse to perform many of the IMAP commands. This means that a client may have to be adjusted to a particular server (or the reverse). And then the value of having a standard at all might be questioned: the idea of a standard is that any client should work well against any server. A client which relies heavily on functions A, B and C will not work very well with a server which provides functions C, D and E but not A and B.
This may be because I did not understand the issues; I am no IMAP expert, and this is just an impression I got from the discussions at the meeting.
The IESG very strongly wants to avoid this situation getting even worse, and therefore forbids any new extensions which merely provide a new way of doing something you can already do.
Because of this, a separate VIEW command and VIEW capability were proposed.
Access Control Lists
A smaller design team is designated to look at remaining issues on this topic.
Retrieval in Binary Format
Binary will be extended to allow binary appends. The server can reject binary appends if it cannot handle it. The FETCH command will be extended. You will be able to retrieve objects in binary format, even though they are stored on the server in Quoted-Printable format.
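The server-side transformation described here, from Quoted-Printable storage to binary delivery, is exactly the standard MIME decoding step. A minimal sketch using Python's standard `quopri` module (the stored bytes below are my own example, not from the draft):

```python
import quopri

# Content stored on the server in Quoted-Printable form.
stored = b"caf=C3=A9 =E2=98=83"

# What a binary FETCH would return: the decoded raw bytes,
# even though the server stores the object in Quoted-Printable.
binary = quopri.decodestring(stored)
```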
Channels are (unlike Binary) not well suited for IMAP and should perhaps be moved to the BPIM working group. It is currently a joint issue on both the IMAPEXT and BPIM lists.
When a draft arrives on channels, it should be published as an individual submission and not as a working group draft (so that the working group is not forced to handle the draft if it does not want to.)
Annotations & Regular Expressions
Store command is extended, MODIFIED is added as an attribute.
List Command Extensions
Multiple mailbox patterns will be allowed.
Lists of selection attributes will be allowed.
Mailbox names will use URLs.
Examples of selection attributes: depth (n) to control hierarchy level for traversal, referrals included in the list.
Examples of response attributes: children, subscribed, unseen, recent, messages, uidnext, uidvalidity.
Mailbox names are going to switch to URL format. Issue: the maximum length of URLs. IMAP has no length limit, but other standards using URLs have a limit of 1000 characters.
"modtime" = select mailboxes which have been changed since a certain time.
Viewing and Sorting
Some people said an ability to perform a sort, and then ask to get back, for example, the first fifty messages in the sorted order, would be very useful to many user agent designers.