
Applications IETF
August 2000 Notes

Personal notes from some
applications area sessions during
the IETF meeting in
Pittsburgh, July-August 2000

By Professor Jacob Palme,
Stockholm University and KTH Technical University


  Here are my personal notes from the IETF meeting in Pittsburgh, July-August 2000. Opinions and statements in these notes are mostly not my own, but rather quotes of things I heard people say during the meeting, even where I do not use quotation marks or name the person being quoted. The selection of what to report is based on my personal interests and covers only some of the meetings, mainly in the applications area. In some cases, what I quote below are statements that sounded interesting to me, but not always statements I understood fully.

General Introduction


Property rights issues related to IETF standards were stressed very strongly at this meeting. Anyone active in IETF work has to promise that all his/her contributions can freely be used, published and implemented without royalty, as ISOC and the IETF wish. This applies not only to formal proposals, but to anything said during an IETF meeting or on an IETF mailing list.

Applications Area General Meeting


Chairs: the area directors, Patrik Fältström and Ned Freed. Ned Freed was stuck at O'Hare airport because of air-traffic congestion and would not arrive until the afternoon.

People complained about delays in publishing documents as RFCs. To help with this, a web page will be developed containing a list of current documents and, for each, the current "token holder", i.e. the person from whom input is expected in order to further process the document.

There was a long discussion about the software used to maintain this page, how it is written, what requirements should be put on it, etc.

Architecture things: How to find a resource and find out how to access that resource. Possible systems to use: DNS (known, deployed, distributed), NAPTR (not just for URNs anymore), draft-ietf-urn-ddds-00.txt, draft-ietf-urn-ddds-database-00.txt. This is not related to the normal DNS usage.

Dynamic Delegation Discovery System (DDDS). DDDS is not dependent on DNS, but most groups seem to plan on using DNS for it. This is a controversial issue. DNS is not, some people said, reliable enough.

Plans are to use DDDS to find "where" and use RESCAP to find out "about".

BOF's this week:

  • Spatial location
  • LDAP version 3 revision BOF
  • Content Negotiation Headers in HTTP

Status of some working groups:

Calendaring and Scheduling is trying to close down, but every time they publish a document there is a flurry of input from vendors working on implementing it.

Content Negotiation is not meeting this time, they are waiting for final IESG review.

DRUMS is close to last call on the SMTP document; the MSG-FMT document has already passed last call. Various jokes related to the fact that DRUMS never seems to finish its work.

EDI-INT is not meeting at this IETF conference. Several documents in IESG review.

FTP extensions is not meeting, no one seems to know anything about it.

HTTP not meeting. Problems finding the cookies.

Instant Messaging and Presence (IMPP) has been very active. Problems getting consensus on which transport protocol to use, but TCP seems to be winning.

Internet Open Trading Protocol (TRADE): all the main protocols are ready, discussions on further work.

LDAP extension: The working group has been closed for two years, but the stream of proposals for changes is as large as when the group was active.

Message Tracking Protocol (MSGTRK): Close to last call. Close to consensus on the protocol documents. We hope to be ready before the end of this year.

Internet Printing Protocol (IPP): Two main documents accepted as standards, waiting for the RFC editor; eight or nine additional documents under consideration. Discussion on whether to close down the group and start a new group for IPP extensions, or to let the group continue for a couple more years. The group is very active, and many documents have been published this year.

NNTP Extensions (NNTPEXT): After a lot of silence a new draft was published.

Resource Capability Discovery (RESCAP): Not meeting.

Telnet TN3270 Enhancements: Not meeting.

Uniform Resource Names (URN): Not meeting. This is closely related to DDDS. Discussions to close the URN group and let the DDDS group do the rest of the work.

WEBDAV is considering access control.

DNS needs enhancements to meet technical needs, but there are problems with the risk of breaking existing infrastructure if DNS is modified.

How to close working groups:

You cannot close a working group until all IETF drafts for the group are accounted for (published as RFCs, advanced as standards, or formally dropped by the WG).


Internationalized Domain Names (IDN)


This is a very sensitive issue. The DNS system is central to the operation of the Internet, and anything which might cause instability in the DNS can be very dangerous. On the other hand, people outside the English-speaking countries do feel the need to have domain names in their native languages. This influences almost every application, since domain names are used almost everywhere. Application protocol developers should think about how the new domain names (with international characters) will influence their applications.

It is also very complex. Every language has its own rules for character sets and canonicalization; we heard long talks on different Asian languages and their particular problems.

And it is very controversial. There are a large number of different proposals for how to handle it. Lots of speakers at the IETF meeting presented different proposed solutions.

Most proposals seem to be about mapping domain names in extended alphabets onto ASCII. The proposed mappings mostly create almost unreadable ASCII text, using BASE64 or something similar. This may not be so user-friendly: applications which do not support internationalized domain names will show the names in this unsightly format.
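As an illustration of how unreadable such ASCII mappings become, here is a hedged example using Punycode (RFC 3492), an ASCII-compatible encoding standardized a few years after this meeting but built into modern Python; the encodings discussed at the meeting, such as RACE, produce similarly opaque results:

```python
# Punycode/IDNA came later than this meeting; it is used here only to
# illustrate the general effect of ASCII-compatible encodings of IDNs.
label = "bücher"                       # a German word, "books"
ascii_form = label.encode("idna").decode("ascii")
print(ascii_form)                      # prints "xn--bcher-kva"
```

An application unaware of the encoding would show the user the "xn--..." form rather than the native-script name.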

Nearly 200 participants came to this meeting. And two meetings were scheduled on different days, indicating that the IESG finds this to be a very important issue.

Some ISPs have started to promote non-English characters in domain names without any standard. The standards people feel that this is very dangerous.

Requirement: Do not disrupt the current operation of the DNS. Good communication with interested groups is wanted. The web site of the group is managed by the chairs of the working group.

They are trying to delete requirements, because the requirements list is so large that no system can satisfy all of them.

Keith Moore said that we must consider the impact on users. A protocol which only works technically will fail if it does not work with users. (That is something I have often wanted people to say at IETF meetings.)

The working group does not intend to ignore these issues, said its chairman.

"The domain name as written on a business card is going to be the test of whether our standard succeeds," said another speaker.

Requirement 1: DNS is essential to the entire Internet; the changes must not damage it in any way. Resolution of names must work as well with the extensions as before.

Only Unicode (version 3.0) is recommended. Having multiple character sets is going to be very problematic, and can only be avoided if Unicode is used. Bi-directionality will be supported.

Normalization will have to occur. We do not want to see different rules for normalization of names in French-speaking Quebec than in France.

The basic question is whether to change the workings of the DNS, or to change the mapping of data onto what is sent through the DNS. Answer: We are not here to change the DNS.

What works now shall continue to work and not have to work differently in the future. We cannot solve a problem for our applications by a change in the lower layers. Ideas like two-step DNS and directory front-ends are discussed. This is more than just hacking characters onto the DNS. We should see this as new services, not as changes to old services.

There are missing requirements for incremental deployment. How can the new services be introduced stepwise without disrupting existing DNS services?

Row-based ASCII Compatible Encoding for IDN (draft-ietf-idn-race)

  • Allows internationalized name parts in current DNS systems. Can be used by itself, does not require a binary format.
  • Fully compatible with today's DNS.
  • Three step process to convert to RACE:
    • Compress input text (absolutely unique, only one allowed compression of one input string)
    • Convert compressed string to Base32 (which loses about 40 % efficiency)
    • Mark with a prefix, currently "ra~~".

People seem to be concerned with the risk of creating long domain names and domain labels, with applications not able to handle such long names or name parts.
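As a hedged sketch (not the actual RACE algorithm), the three-step conversion described above might look like this; the real draft specifies a particular row-based compression in step 1, which is skipped here, and the prefix is the one quoted in these notes:

```python
import base64

def toy_ace_encode(label: str, prefix: str = "ra~~") -> str:
    """Toy sketch of a RACE-like ASCII-compatible encoding.

    Step 1 is simplified: the real draft compresses the code units
    row-wise; here we just take the raw UTF-16BE code units.
    """
    raw = label.encode("utf-16-be")
    # Step 2: Base32 (5 bits per character, hence roughly 40 % expansion).
    b32 = base64.b32encode(raw).decode("ascii").rstrip("=").lower()
    # Step 3: mark the label with a recognizable prefix.
    return prefix + b32
```

Even this toy version shows why people worried about label length: every character of input costs several ASCII characters of output, on top of the prefix.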

Using universal character set data in the DNS

Plans to use an unused "IN" bit in the second word of the DNS to indicate that a name is IDN compatible.

Canonicalization rules stored in the DNS itself

A major issue is extended case equivalence rules. Just like "APPLE" is identical to "apple" in domain names, also "Ærta" is equivalent to "Ärta" and "Idee" is equivalent to "Idée". This is very complex, and the rules vary with the language. For some Asian languages, it is impossible to specify automatic algorithms for such equivalencies.
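A hedged sketch of why such equivalence rules must be language-dependent; the tables below are tiny made-up placeholders, not real canonicalization rules:

```python
# Hypothetical, heavily simplified per-language folding tables. Real
# rules are far richer and, under this proposal, would be fetched from
# the DNS itself rather than hard-coded.
RULES = {
    "fr": {"é": "e", "è": "e", "ê": "e"},   # French folds accents on e
    "sv": {},                               # Swedish keeps "ä" distinct
}

def canonicalize(label: str, lang: str) -> str:
    label = label.lower()                   # the "APPLE" == "apple" rule
    for src, dst in RULES.get(lang, {}).items():
        label = label.replace(src, dst)
    return label
```

With these toy tables, "Idée" and "Idee" canonicalize to the same name under French rules, while "Ärta" and "Arta" remain distinct under Swedish rules.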

There are going to be different local rules for different regions. There is a need to dynamically download the appropriate rules for a certain region.

The canonicalization rules can be stored in the DNS itself. A special top-level-domain "" might store the canonicalization rules. For each character, a DNS lookup is made to find out how to canonicalize that character.

Each top-level domain will have its own canonicalization rules, for example ".jp" domains will be handled by the canonicalization rules for the Japanese character set. Would this mean that for example Swedish words are not allowed in domain names under ".jp"? Is this acceptable?

There is a problem with risk of exceeding packet size limits for DNS.

This presentation caused a lot of discussion:

  • Faulty use of leaf and non-leaf nodes in the DNS.
  • Why forbid registration of Swedish-language domain names under the ".JP" top-level domain?
  • How to handle several different languages in the same country, possibly separate subdomains for areas speaking different languages.
  • Problems with complex code in clients, which may have to be updated quite often.

The Han ideograph for IDN

Each language user should investigate the problems for his language with IDNs. Here is one speaker's discussion of one such issue.

The Han ideographs (used in China, Japan, Korea and some other neighboring Asian countries) are very complex, and the case equivalence rules are ambiguous and unclear. There are four different approaches: code-based substitution, character-by-character, lexicon-based and context-based.

Chinese has "traditional Chinese" and "simplified Chinese", the latter introduced in the 1950s to simplify the character rules and make them easier to learn, read and write. For example, the "rain" element was removed from the pictograph for "lightning". There are 2244 simplified characters, of which 2145 are listed in Unicode. Conversion from simplified to traditional form is impossible without knowing the context.

A person in the room said that he did not agree that TC-to-SC (traditional-to-simplified Chinese) translation is simple and can be allowed in the DNS.


The DNSII-MDNP method claims to solve all problems, according to its own proponents, but it is patented and all users must pay royalties to the company owning the patent.

Criticism: Requires all software to handle all the rules for all the different languages.

RACE vs. UTF-8 vs. UTF-5

Evaluation of proposed encoding for IDN by mDNkit.

mDNkit = multilingual Domain Name evaluation kit. It was developed by the Japanese IDN group.



Spatial Location BOF


Second BOF.

Purpose of SLoP: To have a standard way for an application to acquire the spatial location of an identifiable resource over or represented on the Internet, in a reliable, secure and scalable manner.

A spatial location client contacts a spatial location server to retrieve the spatial location of a resource. The resource itself is not involved in the standard protocol.

Comment from the floor: Spatial information may be useful as an add-on to several existing protocols.


  1. Spatial Location Representation
  2. Representation Negotiation
  3. Security mechanism
  4. Policy mechanism
  5. Server discovery mechanism
  6. Transmission and reliability mechanism
  7. SLoP Message coding Mechanism


Target: An entity whose location is known by the server and desired by the client.
Server: Entity which supplies the location of the target to the client.
Client: Entity which gets the location of the target from the server.


GPS has one format, WGS 84. It has some problems, because the earth is not a perfect spheroid. A quite different method would be a spatial location based on country, state, city, etc. The protocol requires everyone to support WGS 84, but other options will be allowed, and there will be a registration procedure for these other representations. The protocol must be able to support multiple representations of the spatial information.
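The requirement for multiple representations could be sketched roughly like this (all names here are illustrative, not from any draft):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GeodeticLocation:
    """WGS 84-style latitude/longitude fix (the mandatory representation)."""
    latitude: float          # degrees
    longitude: float         # degrees
    altitude: Optional[float] = None

@dataclass
class CivicLocation:
    """Country/state/city-style representation (a registered alternative)."""
    country: str
    state: str
    city: str

@dataclass
class LocationAnswer:
    """A server's answer: one target, possibly several representations."""
    target: str
    representations: List[object] = field(default_factory=list)
```

A server could then return the same target's location both geodetically and civically, and let the client pick whichever form it negotiated.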

Other issues: Time, velocity, direction of movement.

We are not experts on geodesic information, and we will not invent a format, we will use a known standardized format.

Naming the target is a big problem.

One person said that the "presence" protocols developed by another IETF group seem to be doing the same things. There were disagreements on whether spatial location is or is not included in the "presence" protocols.

Must support multiple formats, one a very simple format, must allow a timestamp, should allow specification of precision, must allow delivery of spatial location in multiple formats.

Negotiation: The server tells the client which spatial representations it supports, the client selects which representation(s) it wants, and a mechanism for periodic updates is provided.

Server discovery: Must be a way of finding a suitable server for getting the location of a certain target. Servers are identified. Server names should be globally unique.

Transport: At least UDP, should support TCP. Some discussion about possible support for additional transport protocols.

Security: Mandatory-to-implement, optional to use. Must describe how it works across firewalls. Should support anonymous use as well as authenticated use.

Question from the floor: Can you identify a certain server as an authority for the location of a certain object?

Policy: Policy enforcement point; specify how servers obtain policy from a policy storage facility. Policy can include accuracy, frequency of report generation, and representation format.


Internet Open Trading Protocol


Task: Sending transactions, optimized for those produced when you are ordering and paying for a product over the Internet.

This working group has been in operation for several years, several of its documents have already been approved as proposed standards. Present non-finished work is on how to discover if a certain client supports the IOTP protocol or not, and using SET over IOTP.

The ECML alliance will produce an XML-based ECML version 2.0. The client is named "wallet" and the server is named "merchant". A wallet can ask a merchant to send something to a certain address, bill it to a certain billing address, etc. They can handle bills, payment receipts, delivery notes, loyalty points and member certificates.

Items for version 2:

  • Separation of content independent messaging layer.
  • Dynamic definition of trading sequences.
  • Offer request block.
  • Improved problem resolution (extend to cover presentation of signed receipt to customer support party, etc.).
  • Server to server messages.



NNTP Extensions WG


NNTP is a protocol used for news clients (newsreaders) to communicate with news servers. NNTP is also one of several protocols used for news servers to communicate with each other.

The NNTP Extensions WG has the task of revising the NNTP protocols and also to add some new functionality as well as an extension mechanism for future extensions.

The group has been working for several years and is mostly finished.

Very few people came to this meeting at this IETF conference. Only 19 people were present, and only 3 of these took seats in the first two rows (indicating that you are actively participating in this group).

Issues at this meeting:

  • The wildmat format, a format for wild-card searches of information from a news server, such as newsgroups. Certain details are being discussed.

  • The MODE READER command is used by the client to tell the server whether it is a newsreader or another news server (= feeder). The server can reply that it only supports one mode, i.e. only supports newsreaders or only supports other news servers.
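The wildmat format mentioned above is, roughly, a comma-separated list of wildcard patterns where the last matching pattern wins and a leading "!" negates; a minimal sketch built on Python's fnmatch, ignoring the details that were still under discussion:

```python
from fnmatch import fnmatchcase

def wildmat(name: str, wildmat_expr: str) -> bool:
    # Later matching patterns override earlier ones; "!" negates.
    matched = False
    for pat in wildmat_expr.split(","):
        negate = pat.startswith("!")
        if negate:
            pat = pat[1:]
        if fnmatchcase(name, pat):
            matched = not negate
    return matched
```

For example, "comp.*,!comp.lang.*" selects all comp.* newsgroups except the comp.lang.* subtree.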

The mailing list has been very silent, but in the last months there has been a sudden explosion of activity in the mailing list.



Instant Messaging and Presence Protocol (IMPP)


This group is working on protocols for systems like ICQ, which allow people to know if their friends are on-line and to chat with them. There are lots of different proposals. Does this reflect different real requirements? Can we at least agree on the requirements, or understand why different people have different requirements? Even the IESG and IAB have had lively discussions on this.

We must agree on one choice of transport protocol, one data model format and extension mechanism.

SIP = Session Initiation Protocol. Builds on RFC 2778, 2779. Realized through SIP extensions. SIP is one possible base for IMPP.

SIP issues:

  • Transport of instant messages.
  • Subscription and notification mechanism.
  • Minimal client support (not including voice and other advanced services).
  • Gateways between IMPP and other IM systems, especially security issues for gateways.



A prototype client is already running.

A demonstration was running during the meeting. Different people in the room sent messages through wireless connections to the server and the result was shown on the screen during the meeting. Rather convincing demonstration.

This is instant messaging, not standard store-and-forward messaging.



Calendaring and Scheduling (CALSCH)


This is also an old group, which has been active for several years. Only 22 people were present at the start of the meeting, increasing to 28 at the end - old groups tend to attract fewer participants than new groups.

Interoperability testing has started, two vendors are working on solving problems with interoperability, with good results, more vendors are interested.

Issues noted during interoperability testing:

  • Repeating and canceled events
  • Recurrence
  • Parsers working OK, however, data objects not always consistent
  • iMIP and MIME handling
  • Alarm not tested
  • No length conformance testing
  • No vendor has implemented journals (= records of where you were at certain times)
  • No testing of signed or encrypted objects
  • No Scheduling tested

They have an implementor's guide, which is liked by implementors, but which requires more experience reports.

iTIP is a scheduling protocol. It is independent of transport protocol, but is at present only available for e-mail. Support for a faster transport protocol than e-mail is wanted.

There was some discussion on CAP versus CRISP, and whether to combine them into one standard or not. The Calendar Access Protocol (CAP) is an Internet protocol that permits a Calendar User (CU) to utilize a Calendar User Agent (CUA) to access an [RFC2445] based Calendar Store (CS). I do not know what CRISP is.

Left to do: IANA properties and restriction tables.

Restriction tables

Define which attributes must be present, and which can be present more than once, in commands sent to a CAP server and in responses returned from the server.

A create command can get many different response codes, since you can create many different objects with a single create command.

There is no requirement to support rollback. So if I send many commands at once, and only some of them are successful, then there is no way of automatically undoing the successful ones if not all succeeded.
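Since the server offers no rollback, a client wanting all-or-nothing behavior has to compensate by hand; a hedged sketch (the `execute`/`undo` callables here are hypothetical placeholders, not CAP operations):

```python
def send_all_or_nothing(execute, undo, commands):
    """Run commands in order; on the first failure, undo earlier successes."""
    done = []
    for cmd in commands:
        if execute(cmd):
            done.append(cmd)
        else:
            # No server-side rollback: issue compensating commands
            # ourselves, in reverse order.
            for prev in reversed(done):
                undo(prev)
            return False
    return True
```

This is fragile (the compensating commands can themselves fail), which is exactly why the lack of rollback was worth noting.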

Some attributes are calculated only: You can retrieve them but not store them explicitly.


If you pipeline multiple commands, you can put a command-id on each command, to be able to sort out which response belongs to which command. Pipelining without such command-ids is not permitted. From the discussion I understood that the pipelining facility (sending multiple commands without waiting for the answer to one command before sending the next) caused a lot of problems.
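The command-id bookkeeping can be sketched like this (a hypothetical client-side helper, not an API from the drafts):

```python
import itertools

class Pipeline:
    """Tag each pipelined command with a command-id so that responses,
    possibly arriving out of order, can be matched to their commands."""

    def __init__(self, send):
        self._send = send            # callable(cmd_id, command)
        self._ids = itertools.count(1)
        self.pending = {}            # cmd_id -> command awaiting response

    def submit(self, command):
        cmd_id = next(self._ids)
        self.pending[cmd_id] = command
        self._send(cmd_id, command)
        return cmd_id

    def on_response(self, cmd_id, response):
        command = self.pending.pop(cmd_id)
        return command, response
```

The `pending` map also shows at a glance which commands are still unanswered, which is where much of the pipelining trouble comes from.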

Interoperability testing

Present at interoperability testing meeting: Lotus (organizer), Microsoft (Outlook, Outlook express), iPlanet (Netscape), Critical Path (EventCenter), MIT. There is sensitivity about vendors participating in interoperability tests, especially if non-released products are to be tested. An agreement in advance may be needed on what to publish of the test results. Some vendors may refuse to participate in such interoperability testing.



Blocks Extensible Exchange Protocol (BEEP)


This was another of the main events during this IETF conference. There have been many initiatives over the years to develop some kind of "base protocol" which can be used when someone needs to develop a new protocol for a new application. They have not been very successful in becoming IETF standards. The common method when people develop new protocols today is to use a suitably simplified and modified version of HTTP, and to send with it objects which often have XML format. Sometimes, the new protocol is defined on top of an e-mail infrastructure instead of HTTP.

The Blocks Extensible Exchange Protocol (the acronyms BEEP and BXXP are both used) is a rather different attempt at providing such a general-purpose protocol. BEEP is not suitable for every application. It is mainly oriented towards applications where two servers need to establish a long-lasting connection and, within this long-lasting connection, be able to multiplex several different conversations simultaneously. Central to BEEP is thus the multiplexing of several different conversations over one TCP connection. Other facilities will be security features based on SASL. Security may be established once per session, and then that security applies to all that follows in the TCP session (and in the several parallel sub-sessions within it). Additionally, BEEP has support for framing, i.e. marking the end of data objects without restricting the content of the objects. Framing is a big nuisance in most applications, and "single-line dots" and "MIME body-part separators" are examples of not-very-neat facilities you do not have to develop if you are using BEEP.

BEEP is planned to work also on other underlying transport protocols than TCP.

BEEP is not suitable for one-shot applications like DNS lookup. It is not suitable for tightly-coupled RPC applications like NFS, and it does not support multicasting.

When you start a BEEP session, only one channel is open, channel 0, the BEEP administration channel. Within this channel, you can use XML-coded operations to start new channels and perform other BEEP administration. The other channels may or may not use XML depending on what the protocol designer prefers.
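For illustration, a channel-start request on channel 0 might be built like this (the element and attribute names follow the BEEP drafts of the time; the profile URI is a made-up placeholder):

```python
import xml.etree.ElementTree as ET

# Channel 0 carries XML-coded management operations; this sketch asks
# the peer to open channel 1 with a (hypothetical) application profile.
start = ET.Element("start", number="1")
ET.SubElement(start, "profile", uri="http://example.org/profiles/myapp")
message = ET.tostring(start, encoding="unicode")
print(message)
```

The peer either confirms with the chosen profile or returns an error element like the one shown below.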

BEEP has some redundancy to make it easier to avoid mix-up of data from different channels. There is a numbering of blocks, and some redundant end-of-frame markings for this purpose.

Example of an error message in BEEP when using XML encoding:

<error code="421" xml:lang="en-US">service not available</error>

Discussion from the floor: If you want to do this over UDP, you need reliability mechanisms, more than what BEEP has today. If you do it over TCP, you will not have the capability of getting different packets in random order and combining them together in your application, since TCP will enforce sequential handling of packets sent.

Answer: BEEP at present only supports SCTP and multiple TCP connections. UDP may be added later.

Discussion from the floor: BEEP requires a reliable sequence-preserving transport protocol.

Discussion from the floor: Congestion control must be moved up to the application. This caused a long discussion with various opinions stated by different people. Congestion control can be moved to the application if the application is able to handle it, otherwise it will be handled in simplistic ways by BEEP itself and by underlying layers.

Discussion from the floor: Is flow control done in the application or the transport layer?

Issues list:

  • Serial/Sequence Numbers

    Serial numbers are used to distinguish between different requests, sequence numbers for consistency checking. These can be viewed as redundant, should their use be reduced?

  • Invalid RSPs

    What happens when a channel gets an invalid (malformed or inappropriate) response to a request. Should the framework explicitly address this?
    Voice 1: Experience says the application should do nothing, and should definitely not try to understand what was meant.
    Voice 2: Say that something is bad to the other end. This will make debugging easier, especially for a developer who only controls one end of the connection.

  • Mime Headers

    MIME headers are only allowed in the first frame. Should this be explicitly explained and consistency checked?

  • REQs and RSPs

    Should the framework allow for non-responded requests, and a way to indicate that certain requests should not result in a response? Should this be indicated in the protocol over the wire, or only specified in the application specification document?

  • Many Responses to one Request

    Question: I may send one request to get status responses every 10 seconds.
    Answer: The request starts a process, which sends the status responses as a series of "requests" in the other direction.

  • Profiles -- Default Content Type

    Should a profile's registration be allowed to specify the default Content-Type?
    Advantage: Reduces overhead of sending the Content-Type header on the wire.
    Comment: Makes debugging by looking at the information on the wire more difficult.

  • Profiles -- Default Charset
    The framework is silent with respect to the default character set used for profiles that are defined using XML.
    The default character set for an XML document is UTF-8, but the rules for using XML as the content type of a profile do not make this explicit.
    Should the framework explicitly permit each XML-based profile to specify a default character set?
  • Channel management -- Closing Channels
    The framework does not contain any notion that a channel may be closed after it is created (except by releasing the whole session). Advantage with closing channels: Buffers can be released.
    Should the framework allow channels to be closed, and if so, how?
    Comment: Should there be any requirement on the number of permitted concurrent channels?
    Comment: Related issue: Can closed channel numbers be reused? Or, if channels cannot be closed, can an open channel that is no longer used be reused?
  • Error codes

    SASL and TLS use three-digit error codes. Other methods are extended numerical codes (e.g. 5.5.12) or tokenized codes (e.g. "syntaxError"). Other profiles can use any kind of error codes they prefer.
    Should other error codes be used for SASL and TLS?
    Chris Newman: Tokenized is nicer, but it should allow a hierarchical structure (such as by the first digit in numerical error codes).

  • TCP Mapping -- Silly Window Syndrome
    Should the tcpmapping document include text like that in RFC 1122 that more fully discusses window management techniques?
    The discussion was mostly about whether one standard should copy text from another standard. This is bad because (i) you may make mistakes, and (ii) the referred-to standard may be corrected or extended, and those changes are not automatically conveyed to the standard that copied the text.
  • TCP -- Shrinking the Window

    Should the tcpmapping document include wording similar to that in the relevant sections of RFC 1122 that makes it clear that the window may be advanced only to the right (i.e. not allow shrinking of windows)?
    Comment: Shrinking windows can be needed and should be allowed.

  • Reduce security on some channels

    Should it be allowed to reduce the security on some channels, for example not encrypting information sent in some channels for efficiency reasons?

  • Error messages per message or channel
    There are two kinds of error responses: Errors for a message and errors for a channel. A channel error requires the closing down of the channel.

Mapping LDAP onto BEEP

To help clarify BEEP requirements, the speaker had tried to map LDAP on top of BEEP. LDAP differs from most other protocols in that it uses ASN.1/DER for encoding, has parallelism, authentication/authorization with anonymous access, bypass, or SASL, and TLS for transport security.

Just like BEEP, LDAP allows multiple requests. A problem: A Search request in LDAP can return some results immediately, and return more results later on. LDAP even allows persistent searches, which are alive all the time and return results as they occur in the directory. LDAP also has requests without responses, which might be mapped on BEEP requests with empty responses, but this is not so neat.

Question: Is BEEP general enough?



Extending HTTP for Content Adaptation (BOF)


Problem: Allowing client devices to describe themselves, as well as the user preferences for the use of these capabilities.

This is an area where many previous attempts have been made by various IETF and W3C groups, and much of the meeting was a presentation of previous attempts and a discussion of their merits.

Content adaptation can be personalization, as well as adaptation to a presentation format. The idea is to create a "contract" between client and server.

Problem with HTTP: the wrong assumption that bandwidth is the problem, when the real problem is latency.

Different from ICAP: not only a database of capabilities.

SIP (draft-nishigaya-sip-ccpp-00) might need similar capabilities.


P3P1.0: The Platform for Privacy Preferences

Web sites disclose their privacy practices in a standardized format, with an XML format for expressing the privacy policy, and a way of associating privacy policies with Web pages or sites.

The need: To transport this information over HTTP.

Advantage: Send reference instead of the full text of the privacy policy. Allows user agents to determine what policy applies to what resource.

Choices: Use HTTP extension framework (RFC 2774), or just declare one or two new headers. Very controversial. Most people prefer just adding new headers, but some people very much want to use an HTTP extension framework.

W3C CC/PP working group report

Based on RDF: a collection of components and their properties, using inline and indirect references. The group wants to reduce overhead by sending indirect references. The need originated in the mobile community.

Conneg working group report

Content negotiation is an exchange of information which leads to selection of the appropriate representation (variant) when transferring a data resource (RFC 2703).

Three steps:

  • Express capabilities of a receiver,
  • Express the capabilities of the sender and the data to be transferred,
  • A protocol to transfer this information.
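The three steps above can be sketched as follows (a toy illustration; real conneg uses the feature-set syntax of RFC 2533 rather than plain media-type lists):

```python
def negotiate(receiver_accepts, sender_variants):
    """Pick the first variant, in the receiver's preference order,
    that the sender can actually supply."""
    for media_type in receiver_accepts:
        if media_type in sender_variants:
            return media_type
    return None
```

Here the receiver's capability list plays step 1, the sender's variant set plays step 2, and the selection stands in for the transfer protocol of step 3.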

Internet Content Adaptation Protocol (ICAP)

A method for adapting HTTP requests/responses to particular needs. Can ship a request or reply to an adaptation server, but does not deal with client capabilities or adaptation rules.

More info: See http://www.i-cap.org.

      | ^
      | |
      v |
ICAP-client   ------------->  ICAP resource
(proxy-cache) <-------------  on ICAP server
      | ^
      | |
      v |
Unmodified HTTP servers may provide ICAP services in some modes.

Larry Masinter said that instead of:

Accept: text/html
Accept-charset: UTF-8
Accept-language: fr, en
User-agent: mozilla ... 800x600

one might use something like:

Accept-features: (&type=text/html,...)
(charset=UTF-8 ...)
(height<600) ...
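A toy evaluator for predicates in the spirit of Masinter's example, e.g. (height<600); the capability names and operators here are illustrative, and real feature expressions (RFC 2533) have a much richer syntax:

```python
import operator

# Minimal operator table for single-feature predicates.
OPS = {"=": operator.eq, "<": operator.lt, ">": operator.gt}

def satisfies(capabilities, name, op, value):
    """True if the client capability `name` exists and passes the test."""
    return name in capabilities and OPS[op](capabilities[name], value)
```

A server could filter its variants by evaluating such predicates against the capabilities the client advertised.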

Conclusion: We are going to prepare a charter and a little more than a requirements document through the mailing list.

