- General remarks
- Introductory speech to all participants
- WG requirements
- Architectural consistency
- Restrictions on document titles
- Audience of RFCs
- For new protocols
- Intellectual property rights (IPR)
- Mailing lists versus face-to-face meetings
- Security considerations
- Well-known constants
- Network management issues
- Internet policy protocol
- HTTP working group
- IETF-DRUMS new version of e-mail standards
- URL BOF
- Calendar and Scheduling Workgroup - CalSch
- WEBDAV WWW Distributed Authoring & Versioning on the WWW Working Group BOF
- Uniform Resource Names
- Internet printing application
This was the largest IETF meeting ever, with more than 1600 participants. There was a discussion on the IETF list about how to make effective progress with so many participants. The person who opened the IETF meeting asked that those who were really active in a subgroup sit up front at its sessions, while other visitors give way.
I noted that several friends whom I had earlier met at ISO and ITU standards meetings were present at the meeting, for example Bengt Acksell and Carl-Uno Manros. I think it is valuable that people with ISO/ITU background participate in IETF work. They have a lot of experience which can be of value to IETF.
A very few participants had portable computers with cellular phone connections, to give them on-line connections in the middle of the meetings.
These notes were mostly written during the ongoing meetings, noting down on a portable computer what people said. This means that the text in the notes may sometimes not be as coherent as a carefully prepared exposition on an issue. Opinions below are not necessarily my own; often I have just written down what someone said. I have not checked the correctness of what I write below with the people responsible for the various standards groups.
Several of the sessions discussed applications where a good directory system might be useful, but Harald T. Alvestrand warned every time that directory systems are a *rathole*. I guess he meant that little progress has been made in this area despite many years of work, and that if you want to define successful standards you should use solutions which do not depend on directory system support.
A general impression of the U.S.A. is how similar the ideas and thinking on social and political issues are here and in Europe. The "dust bin" on the computer desktop is for example being renamed "recycling bin" (which really is more apt, since freed disk space can be reused for other files when you empty the "recycling bin").
If you have proposals for nominations to the IETF board, write e-mail to:
Network management and operations are joined into one area.
Requirements for Internet Standards:
IESG will in the future review all documents for RFC publication, not only standards. There must be a high technical quality (see RFC 2026).
We all share responsibility for architectural consistency. Do not specify multiple ways to do the same thing. Avoid conflicting objectives.
RFCs should be clear as to who their intended audience is:
i18n: Internationalization; Internet is not U.S. only, not English-only, MAIL TO is not POST TIL.
Note: UTF-8 is an encoding scheme for ISO 10646, such that US-ASCII characters are encoded identically to ordinary US-ASCII.
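As a small illustration (my own, not from the meeting): in Python, US-ASCII text produces exactly the same bytes whether encoded as US-ASCII or as UTF-8, while a non-ASCII character becomes a multi-byte sequence.

```python
# US-ASCII text yields the same bytes under UTF-8 as under US-ASCII.
ascii_text = "MAIL TO"
assert ascii_text.encode("utf-8") == ascii_text.encode("ascii")

# A non-ASCII character becomes a multi-byte sequence in UTF-8.
utf8_bytes = "POST TIL Å".encode("utf-8")
print(utf8_bytes)  # b'POST TIL \xc3\x85' -- the Å takes two bytes
```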
Options which no one implements must be removed from a standard. The working group chair must document implementation experience, including which options are really implemented.
Important: the basic IETF work takes place on mailing lists, not at face-to-face meetings. These mailing lists must be open. Some people might disrupt discussion, and it might in rare cases be necessary to prohibit someone from participation. But this should be very rare and need approval from the area directors.
Security considerations must be real. The phrase "Not considered" is not acceptable in the security considerations section. Discuss security assumptions behind protocol changes, security impacts, environment where the protocol is to run, list known threats. Example: When to allow clear text passwords.
IANA can register constants & tags
If this is needed, an "IANA considerations" section in the RFC is needed
Draft standards must include network management issues. If management is not needed, this must be explained. The current standard management protocol is SNMPv2; you must ensure compatibility with it.
Scalability: Success is a problem. State limitations on scale of use. Discuss: "can it scale to tens of millions of users"? Include all limits (round trip, computing resources, symmetry, etc.).
Applicability: What is it for, how does it relate to existing protocols, arena of use (LAN, AS, Internet).
Network stability: What happens if the network goes down? Note any particular problems. Robustness. Topology problems.
I happened to get into a meeting in the transport area (otherwise all my reports below are from the applications area). The title was "Internet policy protocol" and it seemed to be about how to manage restrictions and discriminations which some Internet service providers put up. By policy is meant administrative restrictions, pricing and accounting strategies.
Pipelining can make HTTP many times more efficient.
If a client sends an HTTP 1.0 request, can the server respond with HTTP/1.1?
Principle: All headers in 1.1 must be ignorable by 1.0 systems. If not, you must switch to a new major number, i.e. 2.0 and not 1.1. Some features were rejected for HTTP 1.1 because of this rule. The rule is called the "minor/major version number rule" and should apply to all IETF standards work.
"And of course never claim 1.0 and send 1.1 additional headers."
"You cannot always send 1.0, and request 1.0 in response, because then there will never be any way of progressing to 1.1."
"Extra headers should not break clients."
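The minor/major rule above can be sketched as a tiny decision function (a hypothetical illustration of my own, not taken from any HTTP specification):

```python
def may_ignore_unknown_headers(sender_version, receiver_version):
    """Sketch of the minor/major version number rule: within the same
    major version, a receiver may safely ignore headers it does not
    understand; across major versions, no such guarantee exists."""
    sender_major, _ = sender_version
    receiver_major, _ = receiver_version
    return sender_major == receiver_major

# A 1.0 system receiving a 1.1 message may ignore the extra headers:
assert may_ignore_unknown_headers((1, 1), (1, 0))
# A 2.0 message makes no such promise to a 1.1 system:
assert not may_ignore_unknown_headers((2, 0), (1, 1))
```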
Should HTTP adopt "Content-Disposition" to indicate the file name? Security risk: Sending a file to replace your password file.
There is a problem with proxies caching of warning headers, I did not quite catch what the problem was.
Hit metering goes to the IESG. A warning was given about a bug concerning proxy authentication, cache revalidation and "Age" handling between 1.0 and 1.1.
There are plans to produce general guidelines on how to develop your own protocol based on HTTP.
Error codes should be registered with IANA.
The task of this group is to produce a new version of the basic e-mail standards (SMTP=RFC 821 and RFC 822, not MIME). The group should not invent new features, just document existing practice in more clear ways.
draft-ietf-drums-abnf-01.txt = Syntax format document
draft-ietf-drums-smtpupd-03.txt = new SMTP document
draft-ietf-drums-msg-fmt-00.txt = new RFC822 document
This is a very technically competent group, and much of its discussions tends to be on very special technical issues, which sometimes raise a lot of controversy but may seem not very important to outsiders.
One example of this is the discussion of the problem with the mandatory domain parameter of EHLO. This does not really make faking e-mail more difficult. The purpose is mainly to help logging. The problem is that the client may not know its domain; should we force them to look up their domain? IP address might be used instead, but this may give problems for machines which get their IP addresses dynamically from the net. Also some clients may have problem with IPv6 IP addresses. So perhaps allow, in the EHLO command, any name which the client knows for itself.
There is also a problem with firewalls, which sometimes may change the internal IP-addresses to different internal IP-addresses; this should preferably be handled in a separate document about proper firewall behavior, which someone said he was working on.
Dave Crocker said: "We have a tradition in the e-mail area of requiring conformance (for receipt) with just about anything anyone does."
There was disagreement about whether loop detection by examining "Received" headers is effective. Stopping loops with an upper limit on the number of Received lines is not nice, but perhaps a very high limit (say 100) may be acceptable as an additional loop-detection aid. Some MTAs remove Received lines, for example firewalls which want to hide internal net structures, so using Received lines alone for loop detection may not be very effective.
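As a hedged sketch of the high-limit idea (the limit of 100 is the figure mentioned above; the function itself is my own illustration, not part of any draft):

```python
from email import message_from_string

MAX_RECEIVED = 100  # the "very high limit" mentioned as possibly acceptable

def looks_like_loop(raw_message: str) -> bool:
    """Return True if the message has accumulated suspiciously many
    Received lines, suggesting it is going around in a mail loop."""
    msg = message_from_string(raw_message)
    return len(msg.get_all("Received") or []) > MAX_RECEIVED

normal = "Received: from a by b\n" * 3 + "Subject: test\n\nbody\n"
assert not looks_like_loop(normal)
```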
Many people wanted a requirement that all ambiguous constructs must be clarified by parentheses, but I did not quite catch whether this was agreed upon.
Discussion about how to specify that strings are case-insensitive in ABNF.
Ambiguity of decimal constants: a numeric literal is a single number; decimal is wanted, octal and hexadecimal are maybe desirable, but if so, how to specify in an unambiguous way that a constant is octal or hexadecimal? Some discussion of appending %<something> at the end of a number which is not decimal. Discussion of whether %T or %D is best to specify decimal notation.
A lot of discussion about whether to use "=" or "::=" for the designation operator. For "=": This is what is mostly used in existing documents. For "::=": This makes the new ABNF clearly different from the old, which can be valuable since there are some differences in other cases. A straw vote got about equal number of votes for either alternative, but almost all agree we should standardize on one alternative only. History seems to win, so "=" will probably be specified.
Not for discussion: Date semantics, minutes with more than 60 seconds in them.
Discussion about where comments are allowed. Is it legal to write "To: (this is a comment) name <local@domain>"???
Reply-To: where the originator/author suggests that replies be sent. Some mailing-list expanders add this field, but we could not agree on whether this should be allowed or not. And what does "Resent-Reply-To" mean? This is a very controversial issue: present practice is vague, and people want to preserve that vagueness, but to what extent, especially since vagueness in Internet standards is in general disapproved of? This is important, not a technical detail!
Resent is trace/logging info, not to be used by an MUA.
Deprecate Resent-Reply-To and Resent-Subject. Allow the rest.
Resent is for manual resending only (some systems use it for automatic resending, but this should not be recommended).
Resent fields from one resending event should be placed together in the message heading. Some people say later ones should occur earlier in the message; others hold the opposite opinion.
Describe the difference between resend and forward.
Discussion about Sender. Chairman proposes that "Sender" should refer to a deliverable mailbox. Non-deliverable trace information should go into a new header field "Originator-Info" instead.
Proposal for a variant of SMTP to be used for message submission, since the present practice of using the same SMTP for both submission (from MUA to MTA) and transferring (between MTAs) has some disadvantages.
The group I am editor in, responsible for sending HTML in e-mail, more or less finished its work at this meeting. The proposed standard is expected to be approved by the IESG shortly. This proposed standard will consist of three parts:
There is also an informational RFC planned and available as an IETF draft (the INFO document). This document will not be part of the standard and will be published later.
The latest versions of all documents from this working group can be located via URL
The main idea of our proposed standard is that you send an HTML document, together with in-line graphics, applets, etc., and also other linked documents if you so wish, in a MIME multipart/related body part. Links from the HTML to other included parts can be provided by CID (Content-ID) URLs or by any other kind of URI, and the linked body part is identified in its heading by either a Content-ID header (linked to by CID URLs) or a Content-Location header (linked to by any other kind of URL). (In fact, the header "Content-ID: foo@bar" can be seen as a special case of the header "Content-Location: CID: foo@bar".)
The Content-Location header identifies a URI for a content part, the part need not be universally retrievable using this base.
The Content-Base header identifies a base URI for a content, and for all objects within it which do not have their own Content-Base.
URIs in HTML-formatted messages and in Content-Location headers can be absolute or relative. If they are relative, and a Content-Base exists, they are to be converted to absolute URIs before matching with other body parts. If there is no Content-Base header, and no Content-Location containing absolute URIs, then exact matching of the relative URIs in the HTML against the Content-Location of the linked parts is performed instead (after removal of line folding).
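To illustrate the structure, here is a hypothetical sketch using Python's standard email library; the cid value and image bytes are invented, and this is my own illustration, not code from the drafts:

```python
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# An HTML part linking to an in-line image through a cid: URL,
# packaged in multipart/related as described above.
related = MIMEMultipart("related", type="text/html")

html = MIMEText('<img src="cid:logo@example.org">', "html")
related.attach(html)

image = MIMEImage(b"\x89PNG\r\n\x1a\n...", _subtype="png")
image.add_header("Content-ID", "<logo@example.org>")  # matched by the cid: URL
related.attach(image)
```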
We discussed some late suggestions for change in the mhtml drafts:
We did not decide to change the document because of these possible problems. The only change we accepted was the changing of references from referring to IETF drafts to referring to RFCs. The only problem here is the IETF draft on internationalization of the WWW. If this document does not soon become an RFC, we have to remove the reference to it in the SPEC.
We decided to keep the INFO document in an IETF draft stage for at least six months more, so that implementation experience can be gained. This means that the reference in the SPEC to the INFO document has to be removed.
One implementation is already ready, the one made by Mark K. Joseph. He has even implemented the complex case of a multipart/alternative inside a multipart/related. Impressive! Half a dozen other implementations are coming soon, so interoperability tests can start soon.
It would be nice to have a set of test messages to test implementations of the standard. I promised to provide storage for such test cases on our HTTP or FTP servers, but who will develop the test messages? Test messages generated by real implementations have the advantage of aiding interoperability testing, and are also less error-prone. Manually created test cases have the advantage of allowing the creation of very special combinations which no existing mail UA may yet be able to generate. I did not promise to develop any test cases.
The chairman checked which ongoing implementations planned to use which of the implementation methods described in the INFO document, chapter 4. Result: All methods seemed to be used by some implementors. But Web4Groups was the only implementor which planned to base its implementation on a HTTP server. All other implementors were going to have their client software in the user workstation. This may be a reason for Web4Groups to consider developing the so-called Java client.
Do we need a WG on URLs?
How should we advance URL syntax to draft?
The practice of angle brackets round URLs will no longer be recommended. It is becoming historic.
Process for vetting new URL schemes. Should perhaps be shorter than requiring an IETF standards track document for every new URL scheme. Example: AOL URL scheme, implemented by America Online, is a proprietary protocol and never meant to become an IETF standard, but still needs to be registered.
Examples of new URL schemes: java:, clsid:.
The chairman asked: "Anybody have a new URL scheme?"
About 20 people put up their hands!
Is the generic URL syntax good enough that authors of new URL schemes do not have to repeat it in their specs?
A good proposal for a URL registry scheme can be found at
http://www.apps.ietf.org/apps. Something Harald T. Alvestrand has written.
Until a new group has been established, the old procedure of requiring a standards track document for a new URL scheme will be followed.
Some people want URLs on things which are not Internet resources, like "Phone:", "Fax:". Is this something we want?
The "//" at the beginning of some URLs is meant to indicate the top level of a hierarchical naming space, and thus to indicate that this is an absolute and not a relative URL.
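Python's standard URL handling shows the effect: a reference beginning with "//" replaces everything after the scheme, while other relative references are resolved within the base (the URLs below are invented examples of my own):

```python
from urllib.parse import urljoin

base = "http://old.example.org/dir/page.html"

# "//" starts a new authority: everything after the scheme is replaced.
print(urljoin(base, "//new.example.org/other"))  # http://new.example.org/other

# A leading "/" only restarts the path; plain names are relative to the directory.
print(urljoin(base, "/other"))   # http://old.example.org/other
print(urljoin(base, "other"))    # http://old.example.org/dir/other
```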
There has been much disagreement on the mailing list. Two major documents exist with ideas which are very difficult to merge.
There exists already an implemented protocol called vCalendar, and a new protocol may be based on vCalendar but may do what the group likes. vCalendar has bugs, which the group has to get rid of. An issues list will be kept, and resolutions will be recorded on the issues list.
The editors of the two competing documents will be coeditors of a new document, arbitration will take place if they cannot agree.
This is a subproblem in CalSch, but a solution might be sought which is useful in other IETF applications. Many applications need an object specification method.
CSCT document is basis of the discussion.
Scope & application specification: base on SMTP/MIME, an IETF transport, or an industry standard. The third alternative won.
Ordering of properties: There is no required order of properties.
Strong, explicit data typing? Resolution: Data typing is optional.
Minimization of property name: Short names rather than easily understandable names, but still try to make them descriptive.
Local time values: should they be allowed? Proposals: only UTC, or allow local times which must carry a time-zone offset from UTC. The second alternative was accepted.
A speaker wanted to add daylight savings time (DST) rule to the times recorded.
Keith Moore said that every event should have a time zone associated with it, and then all times regarding the scheduling of that event must be in the time zone of the event.
This has been a very contentious issue, and does not seem to be resolved.
Times must be unambiguous for all interpreters. Problem with this: Unknown dates for DST changes in the future means that some times cannot be unambiguous.
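The accepted approach of local times carrying an offset from UTC can be illustrated with Python's datetime module; the offset and date below are invented examples of my own:

```python
from datetime import datetime, timedelta, timezone

# A local time with an explicit UTC offset is unambiguous for all interpreters.
pst = timezone(timedelta(hours=-8))
meeting = datetime(1996, 12, 13, 9, 0, tzinfo=pst)
print(meeting.astimezone(timezone.utc))  # 1996-12-13 17:00:00+00:00

# A "naive" local time is ambiguous: which zone, and which DST rule applied?
naive = datetime(1996, 12, 13, 9, 0)
```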
Can we agree on one true calendar (i.e. Gregorian), or multiple scales, coming from different religions. Definitions like "second Monday of each month" will mean different things with calendars based on different religions.
Someone said that not all dates must be in UTC format.
Multi-valued property values: Yes, multi-valued allowed. Not resolved whether all the values must be of the same type.
Property normalization: How do we define a property? Example: Displayalarm (Dalarm) a complex data structure.
There was a long discussion on whether IETF should have a common data typing method, which can be used in all standards, or not. I felt like shouting "use ASN.1" but did not quite dare, that would be severe heresy in this organization!
Need for protocol: TCP-based or only e-mail? TCP-based has some advantages. You can get an immediate response to a request, i.e. a "busy" response. Also useful when you send requests to facilities, like to a meeting room.
If you send a request by e-mail, it may be lying for weeks, if you send it to their calendar application, at least you get a response that it has been received by their calendar application.
The intention is to try the calendar peer-to-peer protocol first. If it does not work, fall-back to e-mail occurs. Store-and-forward will probably be forbidden in the peer-to-peer protocol.
Some existing calendar systems are not mail-based.
Server ID resolution
Argument for using existing protocol: Every new protocol must be negotiated with firewall manufacturers and maintainers, and such negotiations are *not* fun.
Home page: http://www.ics.uci.edu/~ejw/authoring/
A lot of information is available from this web site!! Go there for more information than I write below.
Goal: To develop extensions to HTTP to support distributed authoring. This can include lock, copy, move/rename, resource redirect, containers, attributes, relationships, notification of intent to edit, destroy, delete, undelete, access control, use of existing authentication schemes, source retrieval, informing proxies.
Send a message with subject "subscribe" to
to join the mailing list.
A very aggressive working schedule, plans to have a standard ready in June 1997.
Working group meetings will be almost every month, not only at IETF meetings. Not a normal IETF working group, but they want IETF support of what they develop.
Three kinds of lock: Write, read, no-modify locks.
A no-modify lock stops everyone from modifying a document; a write lock restricts who may change it. This is useful because if a person modifies document A, which has links to documents B and C, he can take out a write lock on A and a no-modify lock on B. Then someone else can modify document C, with a link to B, without the risk that B is modified. If the first person had write-locked A, B and C, then the other person would not be sure that B was not going to be modified.
A long discussion about locking and authentication. This seems to be the kind of thing which people want to discuss very much. Perhaps Roland should submit to them his list of 46 different role elements, they might get quite excited about that!!
Partial write: If you have made small changes to a large document, you should be able to submit only the changes and not transmit the whole document.
It should be possible to make a copy (on the server) of a document on that server without having to download and upload the whole document again. It should be possible to change the name (URL) of a document without up/downloading of it.
A very long discussion about the rename (move) facility. There are obviously serious problems with this. What happens to existing links to an object which has been moved? There can be links to it almost anywhere in the world!
They seem to have thought of almost everything. Their terminology is sometimes different from ours in Web4Groups, but their ideas contain almost everything we have been thinking of.
The thing they do not seem to have considered enough is the groupware aspects: How to manage discussions during the work of modifying a document.
If Web4Groups had started our work on joint editing a year later, this might have been a very nice base for our work in Web4Groups (with our own group-support-oriented extensions) but this is probably too late now.
Note: All their work is directed at managing a document or a set of documents on one single server. No support for distributed servers, like we have in Web4Groups, is included in their work.
URNs are meant to be more location-independent names of documents than URLs. A document whose location changes could keep the same URN. A user who wants a document first asks a URN server for its location, and gets in return a URL to retrieve the document. The actual document might be stored in several places, and the URN server returns the URL of the closest of them.
This work has been reported at many IETF meetings; let us see if they have results to report now.
Requirements: evolution (how go from an experimental system to something more used), usability (clients, managers), security. Someone in the meeting said that internationalization ought to be covered.
The NID is looked up in a global registry that will send us to some private URN resolution scheme or to an intermediate UDS server that will tell us which server will resolve the NID.
There can also be a global NID registry, for rapid lookup of commonly used NIDs.
Access control on hints, especially protection against unauthorized modification. The provider of a hint should be able to stop others from changing it.
Server authenticity: (masquerading).
Server distribution (denial of service).
Privacy: Protect users from resolution services, protect users from intermediate services like firewalls, protect publishers.
Non-issues today: some general requirements on distributed systems. There is no good general IETF document on such requirements.
To be discussed via email: Separation of URNs from semantics. Efficiency as a goal or requirement. Should we distinguish between long-term and short-term hints? Management of usage. Acceptance by potential providers. Security and privacy.
Shall tag be mandatory, optional, not used at all?
Why not allow numbers and "-" in the namespace?
% can be used to escape case insensitivity. NSS is case sensitive. % is allowed, but only for escaping, not as a regular character.
Where does a URN end? This is up to each separate encoding scheme. Angle brackets (< and >) should be used (different from URLs where the requirement for angle brackets will be removed).
Two URNs can point to the same resource but not be identical. Example: "The weather in San Jose on December 13", and "today's weather in San Jose". On December 13, but only that day, these two URNs point to the same resource. Equivalence will be removed from the document because of these and other problems. Namespaces can define their own equivalence rules if they so prefer.
The reason NAPTR will be an experimental and not a proposed standard is that NAPTR is based on the DNS, and some people say this is not a good idea. Some people say that the DNS is already so fragile and overloaded that one should not put new burdens on it.
Discussion item: should we require all URNs to start with "URN:"? This is good, but commercial implementors might be expected to allow abbreviated URNs without this start string.
URNs will anyway be ugly and not user-friendly, so no problem requiring "URN:" as a prefix, said someone.
This caused a very long discussion. Typical of the kind of issue which causes a lot of discussion.
Someone said: I remember exactly the same discussion two years ago.
A BOF was held to start up an IETF working group on Internet printing, i.e. how a client can contact a printer through the Internet and have a document printed.
The group was headed by Carl-Uno Manros, previously a leading developer of OSI standards, and the group plans to develop a standard using the OSI DPA (Document Printing Application) as a basis. This is an interesting tendency, which we will probably see more of, where developers of OSI standards will want to convert their standards to IETF standards. One could feel a certain skepticism in the group from IETF veterans.
The group will however not copy directly the OSI functions. The new protocol will be simplified to only contain the most important functions, it will use ABNF encoding methods, not ASN.1, and it will be based on using existing Internet protocols to a large extent. In particular, HTTP will be used as the protocol to communicate between user agent and printing server. User control of printing, like locating printers, checking their capabilities, submitting print jobs, checking print queues, canceling print jobs, etc., will be performed using ordinary web browsers and HTML form based user interfaces. But they also planned to download printer drivers from the net to the personal computer of the user. I did not quite understand what was to be done by special-purpose clients (printer drivers) and what was to be done using ordinary web browsers.
The work was supported by an impressive number of manufacturers of computers and printers, like IBM, Lexmark, Canon, Xerox, etc.
IETF drafts are already available specifying user requirements (draft-wright-ipp-req-00.txt) and a first protocol draft (draft-isaacson-ipp-info-00.txt).
Goal: To be able to communicate to or from ordinary fax machines via the Internet. Mainly sending fax via the Internet but doing the final delivery via fax, although other architectural combinations are possible.
Two main modes: store-and-forward, via e-mail, where the whole message is stored before it is delivered. This is the primary objective of the group, and the primary storage format in this mode will be TIFF-F, a variant of TIFF specially designed for fax.
Another mode, called streaming, means that, just like for ordinary fax machines, you are transmitting and printing without even waiting for a whole page to be read. In this mode, TIFF-F does not work, since it requires a page count before the document is sent.
There are various intermediate possibilities between store-and-forward and streaming.
There also seemed to be agreement that machines which transmit to fax machines should be able to handle text/plain. Of course many products will be capable of handling many more formats. Which character sets?
There is a possibility that TIFF-F has intellectual property rights such that IETF cannot use it. This must be clarified.
I tried to ask why fax could not be seen as a special case of printing, but Harald T. Alvestrand did not want to discuss this. He was probably right. In IETF, one tries for simple standards to well-defined problems, not very general-purpose standards. Still, there are a number of similarities, for example the need for queue handling and for retracting a fax sent via the Internet but not yet forwarded to the fax machine. Such facilities would, by the way, also be very useful for e-mail. I want to take back an e-mail message I have sent recently at least once a week. I guess other people have similar experience. (The Eudora e-mail system has a partial solution to this. You can set Eudora such that when you send an e-mail, it is just stored for sending, and the real submission is done at a later time.)
One thing I learnt was that fax is not 200 BPI horizontally, it is 204 BPI horizontally. And vertically, it is not 100 or 200, it is 98 or 196.