XML and Web Services In The News - 4 January 2007

Provided by OASIS | Edited by Robin Cover

This issue of XML Daily Newslink is sponsored by IBM Corporation



HEADLINES:

 Introducing XML Internationalization
 HTTP Extensions for Distributed Authoring: WebDAV
 The XQuery Chimera Takes Center Stage
 UBL Methodology for Code-list and Value Validation
 Patch Issued for OpenOffice.org WMF Vulnerability
 Rogue Wave Software Adds New Partners for Rogue Wave Hydra Suite
 Analyst Predicts Eightfold Increase in New Storage Capacity by 2012


Introducing XML Internationalization
Hernan Silberman, IBM developerWorks
XML has become a trusted technology to represent and transmit information of all kinds. XML's designers were visionary in making XML flexible enough to support multiple languages and character encodings, features that make it especially suited for applications that work in multiple locales. Today, XML is the fundamental technology driving the internationalization of applications in the increasingly flat world. But do you really understand the concepts of internationalization and localization? Internationalization is a design approach which anticipates the adaptation of a product to multiple different geographic regions and cultures. Localization is the act of specializing a product for a specific locale, a task that is much easier if it follows an internationalization effort. XML was designed to support international use and thrives because of its support for multiple character encodings and Unicode, and because of the xml:lang attribute, which can be used to identify the language used in a given document. Recent developments in XML internationalization include the Internationalization Tag Set (ITS), which provides a standard set of tags for identifying the portions of a document that need to be translated, and various additional tools that enable internationalization of XML documents. This article explains what they are, how they work, and why you want to use them.
See also: the Internationalization Tag Set
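The two mechanisms the article names can be sketched in a few lines. The snippet below is illustrative only: the sample document and the `translatable` helper are invented for this sketch, but the `xml:lang` attribute and the ITS `its:translate` attribute (namespace `http://www.w3.org/2005/11/its`, per ITS 1.0) are real.

```python
import xml.etree.ElementTree as ET

# A small hypothetical document using xml:lang plus an ITS translate flag.
DOC = """<messages xmlns:its="http://www.w3.org/2005/11/its">
  <msg xml:lang="en">Hello, world</msg>
  <msg xml:lang="fr">Bonjour le monde</msg>
  <msg its:translate="no">ACME-PRODUCT-CODE-42</msg>
</messages>"""

XML_NS = "{http://www.w3.org/XML/1998/namespace}"
ITS_NS = "{http://www.w3.org/2005/11/its}"

def translatable(root):
    """Return (language, text) pairs for elements not marked its:translate='no'."""
    out = []
    for el in root.iter("msg"):
        if el.get(ITS_NS + "translate") == "no":
            continue  # protected content such as product codes stays untouched
        out.append((el.get(XML_NS + "lang"), el.text))
    return out

root = ET.fromstring(DOC)
print(translatable(root))  # [('en', 'Hello, world'), ('fr', 'Bonjour le monde')]
```

A localization tool would hand only the returned pairs to translators, skipping anything an ITS rule marks non-translatable.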

HTTP Extensions for Distributed Authoring: WebDAV
Lisa Dusseault (ed), IETF Internet Draft
The Internet Engineering Steering Group (IESG) announced receipt of a new WebDAV HTTP Extensions submission for consideration as a Proposed Standard. The Internet Draft has been produced by members of the IETF WWW Distributed Authoring and Versioning (WEBDAV) Working Group. The IESG plans to make a decision in the next few weeks, and solicits final comments on this action. The "HTTP Extensions for Distributed Authoring - WebDAV" specification consists of a set of methods, headers, and content-types ancillary to HTTP/1.1 for the management of resource properties, creation and management of resource collections, URL namespace manipulation, and resource locking (collision avoidance). While the status codes provided by HTTP/1.1 are sufficient to describe most error conditions encountered by WebDAV methods, there are some errors that do not fall neatly into the existing categories. This specification defines new status codes developed for WebDAV methods and describes existing HTTP status codes as used in WebDAV. Since some WebDAV methods may operate over many resources, the Multi-Status response section has been introduced to return status information for multiple resources. Finally, this version of WebDAV introduces precondition and postcondition XML elements in error response bodies. WebDAV uses XML for property names and some values, and also uses XML to marshal complicated requests and responses. This specification contains DTD and text definitions of all properties and all other XML elements used in marshalling. WebDAV includes a few special rules on extending WebDAV XML marshalling in backwards-compatible ways. Finishing off the specification are sections on what it means for a resource to be compliant with this specification, on internationalization support, and on security.
While the WEBDAV working group was originally chartered to produce a draft standard update to RFC 2518, this document is being targeted as a replacement Proposed Standard because of a number of substantive changes to the original semantics. These are summarized in Appendix F, but a full review of the document is required to see the entire scope.
See also: WebDAV Resources
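The Multi-Status mechanism mentioned above is easy to picture from a response body. The body below is a hypothetical example of the 207 Multi-Status XML that a WebDAV method touching several resources returns (the `DAV:` namespace and `multistatus`/`response`/`href`/`status` element names come from the specification; the paths and the helper are invented):

```python
import xml.etree.ElementTree as ET

# Hypothetical 207 Multi-Status body: one status line per affected resource.
MULTISTATUS = """<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/docs/a.txt</D:href>
    <D:status>HTTP/1.1 200 OK</D:status>
  </D:response>
  <D:response>
    <D:href>/docs/locked.txt</D:href>
    <D:status>HTTP/1.1 423 Locked</D:status>
  </D:response>
</D:multistatus>"""

DAV = "{DAV:}"

def per_resource_status(body):
    """Map each href in a multistatus body to its HTTP status line."""
    result = {}
    for resp in ET.fromstring(body).findall(DAV + "response"):
        result[resp.findtext(DAV + "href")] = resp.findtext(DAV + "status")
    return result

print(per_resource_status(MULTISTATUS))
```

A client can thus learn that one resource succeeded while another was locked, information a single HTTP status line could not carry.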

The XQuery Chimera Takes Center Stage
Simon St. Laurent, XML.com
For the first time in many years, I left an XML conference thinking that XML might actually finally change the Web significantly — and soon. XML still isn't likely to change the Web much on the client side, beyond the role it plays in Ajax and related technologies. Even that role is likely to be reduced by JSON. The dreams of XML hypertext are dead, or at least thoroughly dormant. The changes I saw at XML 2006 that are driving XML deeper into the Web seem likely, for now, to operate mostly on the server side, as XQuery both brings XML databases to a wider audience and combines access to relational data and XML. XML has never worked neatly with the heart of most web applications' architecture, the relational database. XML's hierarchical structures map poorly to relational database structures. You can, of course, create table- and record-like documents that fit easily with relational databases, but that's a fairly tiny if important subset of XML possibilities and documents. Web applications built on relational databases can and do use XML, of course. Applications routinely generate XML from query results, and import XML documents by shredding them into pieces spread across tables. The more complicated the document, the more likely that multiple tables will be involved, or that it will prove easier to store the XML as a BLOB or a separate file. Relational databases aren't likely to go away any time soon, however. They're far too good at storing structured data, scale better than the alternatives, and offer much more flexibility than most people know what to do with. XQuery can work with them; it just offers new options, making it easier to optimize among relational databases for structured data and other kinds of data storage for more loosely structured hierarchical data... XQuery itself isn't about the Web — it's about collecting information from various sources. 
However, it also provides templating facilities like those of XSLT, and is perfectly capable of generating XML or HTML. Where traditional scripting languages have split querying from the application and presentation logic, XQuery lets developers combine the query with the result generation... XQuery use has another side benefit: cleaner XML than that produced by a lot of current scripts. XML well-formedness is a natural side-effect of using XQuery, and even with the mixing of presentation and query layers, converting XQuery that generates HTML to XQuery that generates XML is not particularly difficult. Perhaps this will accelerate the shift toward making data available without an HTML wrapper.
See also: W3C XQuery references
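The "combine the query with the result generation" point is the heart of XQuery's FLWOR expressions (for / where / order by / return). As a rough illustration recast in Python rather than XQuery itself (the sample data and function are invented), note how building the result as a tree makes well-formed output a by-product of the serializer instead of careful string concatenation:

```python
import xml.etree.ElementTree as ET

# FLWOR-style query recast in Python over invented sample data.
SOURCE = """<books>
  <book year="2004"><title>XQuery from the Experts</title></book>
  <book year="1999"><title>XML Bible</title></book>
  <book year="2006"><title>XQuery</title></book>
</books>"""

def recent_titles(xml_text, since):
    src = ET.fromstring(xml_text)
    result = ET.Element("recent")                         # "return" constructor
    books = (b for b in src.iter("book")                  # "for" clause
             if int(b.get("year")) >= since)              # "where" clause
    for b in sorted(books, key=lambda b: b.get("year")):  # "order by" clause
        item = ET.SubElement(result, "title")
        item.text = b.findtext("title")
    return ET.tostring(result, encoding="unicode")

print(recent_titles(SOURCE, 2004))
# <recent><title>XQuery from the Experts</title><title>XQuery</title></recent>
```

Swapping the `recent` constructor for an HTML fragment is all it takes to move between "XQuery that generates HTML" and "XQuery that generates XML", which is the conversion the article calls not particularly difficult.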

UBL Methodology for Code-list and Value Validation
Rick Jelliffe, O'Reilly Reviews
Ken Holman sent me a copy of the latest draft of the OASIS/UBL Methodology for Code-list and Value Validation, which is a pretty good use of Schematron. It looks like a neat and workable solution to a problem that is somewhere between baroque and a hard place using XSD. Imagine you are a trading company: you have documents with various fields for countries: countries you can send from, countries you can send to, countries the US won't allow you to export to, countries you can use as hubs, countries with regional offices, etc. And you also have lots of other documents with similar or different sets of countries. And countries are only the start: you also have product codes where different fields can have different sets of codes, and so on. And this may vary according to where the document came from (the Libyan branch office may have different rules from the Alaskan branch office). And, of course, the values of codes may have interdependencies, such as "the source must be different from the destination." So lots of uses of a standard vocabulary, but lots of local and changing subsets that are much closer to "business rules" than "datatypes". If you used XML Schemas, you could theoretically derive by restriction all the different subset codes, then use "redefine" on every top-level element that used the subsets. You'd have to do this redefine on base types where possible, so that subsequent derived types would inherit the restriction, perhaps, except then you'd have to check that any subsequent derived types that themselves define restrictions are indeed subsets. Have a breakdown and a good cup of tea. With the Schematron approach, you select the items from the code list you want, and some magic tool provided by the methodology generates the Schematron code, which just uses simple XPaths.
See also: Code List Representation Requirements
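The kind of assertion a generated Schematron schema makes is simple to mimic. This sketch is hypothetical (the document shape, element names, and code-list subset are invented), but the two rules mirror the examples above: each value must come from a locally chosen subset of the standard code list, and the source must differ from the destination.

```python
import xml.etree.ElementTree as ET

# Hypothetical trade document and a branch-office subset of a country list.
ORDER = """<order>
  <sourceCountry>DE</sourceCountry>
  <destinationCountry>US</destinationCountry>
</order>"""

ALLOWED_DESTINATIONS = {"US", "FR", "JP"}  # local, changing business subset

def validate(xml_text):
    """Return rule violations, in the style of Schematron assertions."""
    doc = ET.fromstring(xml_text)
    src = doc.findtext("sourceCountry")
    dst = doc.findtext("destinationCountry")
    errors = []
    if dst not in ALLOWED_DESTINATIONS:
        errors.append(f"destination {dst} not in the allowed code list")
    if src == dst:
        errors.append("source must be different from the destination")
    return errors

print(validate(ORDER))  # [] -> the document passes both assertions
```

Because each rule is just a check over XPath-selected values, updating the Libyan branch's subset means regenerating one rule set, not re-deriving a type hierarchy by restriction and redefine.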

Patch Issued for OpenOffice.org WMF Vulnerability
Jeremy Kirk, LinuxWorld.com
A patch has been widely released for a vulnerability in the OpenOffice.org productivity suite, a problem rated as "highly critical" by one security vendor. The flaw could be exploited by creating a malicious file in the Windows Metafile (WMF) or Enhanced Metafile (EMF) formats. If the file was opened by a user, it could start running unauthorized code on a computer, according to an advisory by Linux distribution vendor Red Hat, which offers the OpenOffice suite with several of its products. OpenOffice.org is a free software suite that includes a word processor, spreadsheet and a presentation program. Red Hat rated the flaw as only "important" since a user would have to open a malicious file, Cox said. Red Hat users will either receive an update automatically or notification to upgrade their software, he added. Secunia, however, rated the vulnerability as "highly critical," a rank of "four" on a five-number scale of increasing severity. The WMF format proved problematic for OpenOffice.org's rival in 2006. After pressure from its customers, Microsoft issued an out-of-cycle patch early last year for its operating systems after widespread attempts to exploit a WMF vulnerability. The flaw — one of the top security problems of 2006 — also left Windows systems vulnerable to running code if a malicious WMF was opened.

Rogue Wave Software Adds New Partners for Rogue Wave Hydra Suite
Staff, SOA WebServices News Desk
Rogue Wave Software announced that it has entered into a partnership agreement with CIBER, Inc. granting CIBER the ability to provide integration and implementation services for Rogue Wave Hydra including support, consulting and training. This marks the seventh partnership agreement completed to extend the sale of Rogue Wave Software's high performance Service Oriented Architecture (SOA) framework, Rogue Wave Hydra, and related components. Rogue Wave Hydra empowers IT architects and professional developers to achieve order-of-magnitude performance and throughput improvements for critical software applications. Rogue Wave Hydra is based on Rogue Wave Software's pioneering 'Software Pipelines' technology and associated methodology, which focuses on achieving efficiency and scalability through concurrent computing and parallel processing. Software Pipelines allow for efficient execution and distribution of software components or services for concurrent processing on available resources. This peer-to-peer architecture minimizes bottlenecks and allows businesses to achieve new levels of throughput and performance. Rogue Wave Hydra also supports the Service Component Architecture (SCA) specification and is the first high-performance SOA development framework that complements key concepts of the SCA architecture, including cross-platform components, tightly- and loosely-coupled service components, Web service standards, BPEL and Service Data Objects (SDO). Future releases of Rogue Wave Hydra will add further support for SCA application programming interfaces, or APIs, and will be compatible with future products from other vendors, while providing specialized capabilities for high-performance application requirements.
See also: HydraSDO
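The article does not detail the Hydra API, but the general "software pipelines" idea it describes (independent stages processing work concurrently and handing results downstream) can be sketched with ordinary threads and queues. Everything below is an invented illustration, not Rogue Wave code:

```python
import queue
import threading

# Illustrative pipeline: each stage runs in its own thread and passes
# work downstream through a queue; a None sentinel shuts the pipeline down.
def stage(func, inbox, outbox):
    while True:
        item = inbox.get()
        if item is None:
            outbox.put(None)  # propagate shutdown to the next stage
            return
        outbox.put(func(item))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2)),
    threading.Thread(target=stage, args=(lambda x: x + 1, q2, q3)),
]
for t in threads:
    t.start()
for n in [1, 2, 3]:
    q1.put(n)
q1.put(None)

results = []
while (item := q3.get()) is not None:
    results.append(item)
for t in threads:
    t.join()
print(results)  # [3, 5, 7]
```

While one stage works on item N, the previous stage can already be working on item N+1, which is the throughput gain the pipelined architecture is after.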

Analyst Predicts Eightfold Increase in New Storage Capacity by 2012
Chris Preimesberger, eWEEK
Data storage analyst and consultant Coughlin Associates will reveal a survey report Jan. 6 at the Storage Visions conference in Las Vegas that predicts an eightfold increase in new digital storage capacity and the doubling of storage-related revenues over the next six years. The Atascadero, Calif.-based firm's 130-plus-page, fourth annual report on data storage and the entertainment market — the 2007 Entertainment Content Creation and Digital Storage Report — indicates that the strong growth in digital storage demand is driven by higher-resolution content creation and distribution as well as archiving and digital preservation. The report analyzes requirements and trends in worldwide data storage for entertainment content acquisition; editing; archiving and digital preservation; as well as digital cinema, broadcast, satellite, cable, network and VOD distribution. Capacity and performance trends are presented and media projections are made for each of the various market segments, a Coughlin spokesperson said. Industry storage capacity and revenue projections include direct-attached storage as well as online and near-line network storage. Market share for content creation storage hardware for these three categories of storage systems is given for 2006. About 54 percent of the total storage capacity was used for content archiving and preservation in 2006. This is expected to increase to 72 percent by 2012.
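As a back-of-envelope check (not a figure from the report itself), the headline projections translate into compound annual growth rates as follows:

```python
# Eightfold capacity growth and doubled revenue over six years (2006-2012)
# imply these compound annual growth rates.
def cagr(growth_factor, years):
    return growth_factor ** (1 / years) - 1

capacity = cagr(8, 6)  # 8x over six years
revenue = cagr(2, 6)   # 2x over six years
print(f"capacity: {capacity:.1%}, revenue: {revenue:.1%}")
# capacity: 41.4%, revenue: 12.2%
```

Capacity growing more than three times as fast as revenue is consistent with the report's emphasis on archiving, where bytes accumulate faster than spending.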


XML.org is an OASIS Information Channel sponsored by BEA Systems, Inc., IBM Corporation, Innodata Isogen, SAP AG and Sun Microsystems, Inc.

Use http://www.oasis-open.org/mlmanage to unsubscribe or change an email address. See http://xml.org/xml/news_market.shtml for the list archives.

