
AU2004237874A1 - Method for Specifying Data for the Assembly of a Document Set - Google Patents


Info

Publication number
AU2004237874A1
AU2004237874A1
Authority
AU
Australia
Prior art keywords
data
view
user
data set
ordered
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2004237874A
Inventor
Andrew John Whitfield King
Alison Joan Lennon
Andrew James Lo
Timothy Merrick Long
Alan Valev Tonisson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2003907199A external-priority patent/AU2003907199A0/en
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2004237874A priority Critical patent/AU2004237874A1/en
Publication of AU2004237874A1 publication Critical patent/AU2004237874A1/en
Abandoned legal-status Critical Current


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Description

S&F Ref: 693292
AUSTRALIA
PATENTS ACT 1990 COMPLETE SPECIFICATION FOR A STANDARD
PATENT
Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): Alison Joan Lennon, Alan Valev Tonisson, Timothy Merrick Long, Andrew James Lo, Andrew John Whitfield King
Address for Service: Spruson & Ferguson, St Martins Tower, Level 31 Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Method for Specifying Data for the Assembly of a Document Set

ASSOCIATED PROVISIONAL APPLICATION
DETAILS
[33] Country: AU  [31] Applic. No(s): 2003907199  [32] Application Date: 23 Dec 2003

The following statement is a full description of this invention, including the best method of performing it known to me/us:

METHOD FOR SPECIFYING DATA FOR THE ASSEMBLY OF A DOCUMENT SET

Copyright Notice

This patent specification contains material that is subject to copyright protection.
The copyright owner has no objection to the reproduction of this patent specification or related materials from associated patent office files for the purposes of review, but otherwise reserves all copyright whatsoever.
Technical Field of the Invention

The present invention relates to the specification of component data for documents in a computer environment. In particular, the present invention relates to the specification of variable data for computer applications such as variable data printing.
Background

Variable data printing (VDP) refers to the generation of a set of documents for printing where typically each document of the set is assembled from a combination of static and variable data. Variable data printing applications generally involve the creation of a document template (or master) containing the static information to be shared by all documents of the document set and slots for variable data, the variable data typically varying for each document in the document set. The process of creating the document set involves instantiating the variable data for each of the variable data slots of the document template. The resulting set of documents can represent a customised or personalised set of documents and is often used for marketing or customer relations purposes. Variable data printing is also often referred to as variable information printing.
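The template-plus-slots model described above can be sketched in a few lines of Python. This is a minimal illustration only; the template text, slot names and data records are hypothetical and not drawn from this specification.

```python
from string import Template

# Hypothetical document template: static text shared by all documents of
# the set, with named slots ($first_name, $offer) for the variable data.
template = Template("Dear $first_name,\nYour $offer voucher is enclosed.")

# Each record supplies the variable data for one document of the set.
records = [
    {"first_name": "Alice", "offer": "10%"},
    {"first_name": "Bob", "offer": "15%"},
]

# Instantiating every slot for every record assembles the document set.
documents = [template.substitute(record) for record in records]
```

Each resulting string is one personalised document; a real VDP system would instantiate graphical slots as well as text, but the instantiation step is the same in principle.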
Variable data printing applications have their origins in text-based mail merge functions associated with earlier word processing systems. Typically text-based mail merge functions involved the creation of a set of text documents from a document template (authored in the word processing application) and variable data (text) which had been stored in a text file. Each document of the created set of documents was generated by instantiating variable text slots in the document template with a record, or row, of data in the text file. The result of this process was a customised or personalised set of documents that could be printed. An example of such a system was the supporting MailMerge application of the word processing package WordStar 3.0 (MicroPro International, Inc.) that was available in the 1980s.
Variable data printing applications differ from the mail merge features of past and existing applications in that they typically allow users to specify graphical images (eg. generated graphics), as well as text, as variable data. Slots in a document template can be associated with graphical information (eg. customised graphs of electricity usage over the past twelve months). This has meant that the set of documents that are generated from a VDP application can represent glossy customised sales brochures rather than the customised letter targets of mail merge systems. However VDP applications share with their precursor mail merge applications a need to be able to specify the variable data, whether it be text or graphical.
The mail merge features of recent word processing applications, such as Microsoft Word™ 2002 (Microsoft Corp.), still work substantially as described for earlier systems.
However the more recent systems have eliminated the need to set up the variable data for a particular document set in a text file. For example, in Microsoft Word™ 2002 the user can specify variable data stored in a variety of other data sources including Microsoft Access™ tables, Microsoft Excel™ worksheets, DBase™ files and Microsoft FoxPro™ files. Once the user has selected a data source, the user must then specify the correct variable data for a slot of a document template by selecting the correct field name from a displayed list of available fields. If variable data is being selected from a Microsoft Excel™ worksheet, then the column names of the worksheet are displayed for the user to select. In the case of a database, such as Microsoft Access™, these field names are the names of columns of an identified table and have generally been previously assigned by a database administrator.
Most VDP applications (eg. PrintShop Mail by Atlas Software BV and Pageflex Persona by Electronics for Imaging) have focussed on providing an application where the user can design a document template to be used to produce the printed set of documents.
In general, the step of deciding what variable data is to be used for each of the variable data slots is achieved by the user selecting a particular table stored in a database and then selecting the appropriate data by clicking on the required field name from a list of available names. For example, in PrintShop Mail Version 4.1 for Windows (Atlas Software BV), the association of a particular column of data from a table can be achieved by the user selecting and then dragging the column name, from a list of available column names, to a desired location in the document template or an existing variable data slot.
This is substantially the same process that existing mail-merge-capable applications like Microsoft Word™ 2002 use to specify the variable data.
The selection of variable data by column or field name requires the user to be able to identify the correct field based on its name alone. Column names are often very terse due to limitations of database management systems. Also, the column names of worksheets are often brief in order to keep the columns of the worksheet to a manageable width for display and printing.
The VDP application Vitesse™ (Elixir Technologies Corp.) allows users to specify variable data by selecting areas of a line data file (a serialised text file generated from, for example, a database). The user can highlight for selection data which is displayed as part of an unstructured text block. The line data is not displayed with data sets identified for the purposes of selection and therefore the user must use a mouse or other pointing device to highlight the area of text containing the desired variable data.
However, where column names are available in a data source, the Vitesse™ application resorts to the typical approach of having the user specify the variable data from a list of column or field names.
There is generally little opportunity in existing VDP applications for the user to preview the data in a data source (eg. table, worksheet, etc.), transform the data to more appropriate forms (eg. consistent use of upper and lower case text) for a current VDP job, to create new data views (possibly involving joins over heterogeneous databases) or create new data components (combinations of two or more stored data fields). Most current VDP applications rely on these data preparation steps being performed as a separate step in the process. Typically the VDP application requires the variable data to exist in a pre-determined view of the data.
Due to increased printing speeds, users now contemplate much larger VDP jobs.
Therefore, there is an increased need to ensure that the variable data is in the correct format for the job, as large amounts of wastage can occur if data are printed incorrectly.
For example, if the names and addresses of customers have been set up in a database table without care to ensure case consistency in the data then, especially for large graphical VDP jobs, it is useful to be able to check the data before commencing the run. Some VDP applications allow the user to preview the run by scrolling through the instantiated document set; however for large VDP jobs this is time-consuming.
Often it is necessary in VDP to be able to specify variable data where data from more than one row or record is instantiated in each document of the document set. For example, it may be necessary to specify, in a letter to the customer, all the products that a customer has purchased over the past twelve months. The data for this particular VDP job may be stored in a database table with a table row being used to store each customer/product pair (ie. there is a one-to-n relationship between customer and product in the database table). The user needs to be able to specify that a letter is to be produced for each customer, but there may be up to, say, three possible different product names included on each letter. This means that up to three individual rows of the customer/product table may be required to produce a single document in the document set when the variable data is instantiated. Most current solutions for this multi-record feature require the user to reformat the data so that there is a single record or row in a table for each document in the document set. So, in the example where there are n products listed for each customer, the VDP user must typically create a new table with m columns for products, where m ≥ n. There is a further problem when data is missing. For example, an error can result if a customer only has two listed products when three are expected.
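The one-to-n grouping described above can be performed without reformatting the table into fixed product columns, as this minimal sketch illustrates (the customer and product values are hypothetical):

```python
from collections import defaultdict

# Hypothetical customer/product table: one row per purchase, so each
# customer (the master record) has a one-to-n relationship with products.
rows = [
    ("Alice", "printer"),
    ("Alice", "toner"),
    ("Bob", "scanner"),
]

# Group the n product rows under their customer instead of creating a new
# table with m fixed product columns; a customer with fewer than the
# expected number of products simply yields a shorter list, not an error.
products_by_customer = defaultdict(list)
for customer, product in rows:
    products_by_customer[customer].append(product)
```

One document would then be produced per customer key, with the grouped list instantiating the repeated product slots.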
Some existing VDP applications allow users to specify variable data to be presented as graphs in the resulting set of documents. For example, DesignMerge™ (Banta Integrated Media) and the DL Formatter™ (Datalogics, Inc.) allow users to specify variable data (eg. indicated from names of database table columns) to be presented as graphs in the resulting document set. However, this feature typically requires the user to design the format of the graphs in the preparation phase (eg. bar or line chart, bar colours, axis names, etc.). This preparation process can be very tedious and successful results require that the variable data must be in the correct form (eg. numeric, able to be graphed against a specified x-axis). Failure of the stored variable data to conform to the correct format typically results in printed documents which must be discarded.
The problem of specifying variable data for slots on a master document is not limited to the generation of customised printed letters or brochures. It must also be addressed by applications which provide forms to display database data. US Patent No. 5,995,985 (Cai) issued November 30, 1999 describes a method for generating formatted output (eg. labels) in which the user is assisted with the mapping (specifying) of card file or database data onto slots of a form displayed on a computer screen. As with the previously-mentioned mail merge and VDP examples, the user must select the required variable data by selecting the name of a field from an existing electronic card file system. The resulting formatted output may be directed to a printer or just used as a means of viewing the card file (or indeed database) data on a computer screen.
Thus the current methods of specifying variable data for the purposes of assembling a document set derived from the variable data generally rely on the user being able to make the selection based on a column or field name. In addition, the current methods of specifying variable data where data from more than one row or record is included in each document of the document set usually involve reformatting of the data.
Finally, existing methods for specifying graphical data require the user to provide presentation information for each graphical object specification and are intolerant to varying data formats in the repeated data structures used to instantiate the documents of the resulting document set.
Summary of the Invention

It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements for specifying variable data for the assembly of a set of documents.
In accordance with one aspect of the present invention, there is disclosed a method of associating an ordered data set with at least one slot in a document template, said method comprising the steps of:

(a) displaying a representation of said document template;

(b) displaying a view of data, said view of data identifying at least one ordered data set available for selection;

(c) detecting a selection of an ordered data set from said displayed view of data, said ordered data set comprising one or more data members; and

(d) associating said selected ordered data set with said at least one slot of said template.
In accordance with another aspect of the present invention, there is disclosed a method of associating an ordered subset of a data set with at least one slot in a document template, said method comprising the steps of:

(a) displaying a representation of the document template;

(b) displaying a view of data, said view of data identifying at least one ordered data set available for selection, at least one member of said at least one ordered data set having a many-to-one relationship with a corresponding member of a master ordered data set;

(c) detecting a selection of a member of said at least one ordered data set from said displayed view of data;

(d) associating said selected member with said at least one slot of said document template, wherein said associating defines an ordered subset of said at least one ordered data set, said subset being represented by said selected member and having a one-to-one correspondence with said master ordered data set.
In accordance with another aspect of the present invention, there is disclosed a method of assembling a set of documents from a document template, said method comprising the steps of:

(a) displaying a representation of said document template;

(b) displaying a view of data, said view of data identifying at least one ordered data set available for selection;

(c) detecting a selection of at least one ordered data set from said displayed view of data, said ordered data set comprising one or more data members;

(d) associating said selected ordered data set with said at least one slot of said template; and

(e) assembling a set of documents from said document template and said associated ordered data set.
Other aspects of the invention, including computer programs, computer readable media and apparatus, are also disclosed.
Brief Description of the Drawings

At least one embodiment of the present invention will now be described with reference to the drawings and appendix, in which:

Fig. 1 is a block diagram showing the operating environment of the arrangements described herein;

Fig. 2 is a schematic that shows how source components of a schema view are mapped into a target view component;

Fig. 3 is a flow chart that describes the process of creating a schema view for a user;

Fig. 4 is a flow chart that describes how a schema view is formed using data represented using XML Schema;

Fig. 5 is a flow chart that describes the mapping creation process;

Fig. 6 is a schematic showing a typical screen layout used for creating a schema view over selected data sources;

Figs. 7A to 7C schematically depict screen layouts used for creating a new mapping;

Fig. 8 is a flow chart describing the process used in one arrangement to infer transforms using an example that a user has edited;

Fig. 9 is intentionally blank;

Figs. 10A, 10B and 10C schematically illustrate how a mapping can affect a schema view;

Fig. 11 is a schematic block diagram representation of a computer system which may be used in the described arrangements;

Figs. 12A to 12F show an example implementation of creating a new data view using a graphical user interface (GUI);

Figs. 13A to 13C show a series of GUIs by which a user can define a new transformation or combination operation;

Figs. 14 to 30 are intentionally absent;

Fig. 31A depicts the process of presenting a data view;

Fig. 31B is a flowchart of the display selection method;

Fig. 32 depicts example XML data to be presented;

Fig. 33 is an example of a base table data structure;

Fig. 34 is an example of a base table data structure with hyperlinks;

Fig. 35 is an example of a table display type;

Fig. 36 is an example of a transposed table display type;

Fig. 37 is an example of a row-wise line graph;

Fig. 38 is an example of a column-wise bar graph;

Fig. 39 is an example of a row-wise pie graph;

Fig. 40 is an example of a row-wise xy plot;

Fig. 41 is a base table display corresponding to the xy plot of Fig. 40;

Fig. 42 is an example of a 2D grid display type;

Fig. 43 is a table display corresponding to the 2D grid shown in Fig. 42;

Fig. 44 is another example of XML data;

Fig. 45 is a fully expanded base table data structure of the XML tree in Fig. 44;

Fig. 46 is a base table data structure of the XML tree in Fig. 44 with hyperlinks;

Fig. 47 is a flowchart of the flat data table construction procedure;

Fig. 48 is a flowchart of the analysis phase of the data view presentation process;

Fig. 49 is a flowchart of the elimination phase of the data view presentation process;

Fig. 50 is a flowchart of item 4920 of Fig. 49;

Fig. 51 is an example of a directed graph with ambiguous preference relations used in the presentation process;

Fig. 52 is a directed graph obtained after ambiguous preference relations are removed from Fig. 51;

Fig. 53 is a flowchart of the preference phase of the data view presentation process;

Fig. 54 is a flowchart of the process of creating new data views using existing query data;

Fig. 55 is a flowchart of the process of adding a data set to an existing data view;

Fig. 56 is a flowchart detailing the process of updating the query tree indicated by step 5535 of Fig. 55;

Fig. 57 is a flowchart of the process for determining a loop variable for a data set iterator as indicated by step 5525 of Fig. 55;

Fig. 58 is an example query tree used to describe the source data view of an example data manipulation process;

Fig. 59 is an example query tree used to describe the target data view of an example data manipulation process;

Fig. 60 is a flowchart of the process of updating a target query's iteration operations for the distinct-union join method;

Fig. 61 is a flowchart of the process of updating a target query's iteration operations for the inner and outer join methods;

Fig. 62 is a flowchart of the process of updating a filter for a target query;

Fig. 63 is an example of a query tree having a specified query sort order;

Fig. 64 is a flowchart of the process of hiding a data component;

Fig. 65 is a flowchart depicting a method of specifying data for the assembly of a document set;

Fig. 66 is a flowchart depicting an alternate method of specifying data for the assembly of a document set;

Fig. 67 is a flowchart depicting the preferred embodiment of a further alternate method of specifying data for the assembly of a document set;

Fig. 68 is an example GUI for the specification of data for variable data printing;

Fig. 69 is an example document template for use in variable data printing;

Fig. 70 is an example table data view for use in variable data printing;

Fig. 71 is an example document generated from the document template shown in Fig. 69 and the variable data shown in Fig. 70; and

Appendix A is an XML Schema example of a preferred serialisation syntax for data view definition documents.
Detailed Description including Best Mode

1. Overview

Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
The arrangements described herein are done so with respect to the Internet, which represents a distributed system of heterogeneous data sources. In this information space, valuable data is stored in database systems (proprietary, legacy and open source) and in structured documents (eg. HTML/XML documents). The arrangements described operate to unify this information space by normalising all information in uniform resource identifier (URI) space. This means that each atom of data is ultimately addressable by a URI. In addition, data from the data sources is communicated using Extensible Markup Language (XML) and the schemas of the data sources are represented using XML Schema. The adoption of these Web standards serves to notationally normalise the data; however the problem of semantic heterogeneity remains.
The arrangements described may also be realised using other systems having heterogeneous data sources. For example, an Intranet system having data stored in various sources such as UNIX™ text files, Oracle™ or Microsoft Access™ database systems, and other proprietary or legacy database systems, may also be used to implement embodiments of the present invention.
Referring to Fig. 1, the described arrangements may be practised as part of a data browsing application 120, that is executed as a software application on a local computer 100 connected to an intranet or the Internet 101. The data browsing application 120 communicates with any number of distributed heterogeneous data sources via the Internet 101. The data sources may be Oracle databases (eg. 150), Sybase databases (eg.
151), simple textual data (eg. 152) such as a Unix file or collections of XML documents (eg. 153). Each data source 150 to 153 has associated therewith a corresponding data server 140, 141, 142, and 143 that communicates with the data browsing application 120.
The data servers 140 to 143 represent processes that are identified by a URI, which accept requests using the HTTP protocol from the data browsing application 120, and return data in the form of XML. The requests can be formulated using an XPath expression, which is appended to the URI of the data server as a query string. XPath is a W3C Recommendation (see http://www.w3.org/TR/xpath). Preferably, the requests are expressed using a richer query language such as the emerging W3C standard, XQuery.
XQuery is a query language (see http://www.w3.org/XML/Query) that uses the structure of XML to express queries across all these kinds of heterogeneous data, whether the data is physically stored in XML or viewed as XML via some middleware such as a data server. In an alternative implementation, the requests can be passed in the body of the HTTP request (eg. using XML messaging protocols such as SOAP).
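As a rough illustration of the request style described above, an XPath expression can be percent-encoded and appended to a data server URI as a query string. The server URI, query-parameter name and XPath expression below are hypothetical; a real data server would also execute the recovered query.

```python
from urllib.parse import quote, urlparse, parse_qs

# Hypothetical data server URI and XPath expression selecting the data.
server_uri = "http://dataserver.example.com/customers"
xpath = "/customers/customer[country='AU']/name"

# The request URI a data browsing client might issue: the expression is
# appended as a percent-encoded query string.
request_uri = server_uri + "?query=" + quote(xpath, safe="")

# The data server recovers the original expression from the query string.
recovered = parse_qs(urlparse(request_uri).query)["query"][0]
```

The round trip through `quote` and `parse_qs` shows that arbitrary XPath syntax (slashes, brackets, quotes) survives the query-string transport intact.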
In an alternative arrangement the data browsing application 120 can directly access web-accessible XML document data sources without the need for a data server. These data sources may be local or accessed via the Internet. Queries directed at these XML document data sources are processed by the data browsing application 120.
The data browsing application 120 preferably has access to a database 130 within the local computer 100 that stores URIs of interest to the user (eg. data source URIs), as well as mapping information required to transform data from the data sources into the view desired by the user. The database 130 can also act as a cache for data obtained from heterogeneous data sources and relevant schemas. The local database can include heterogeneous forms of storage including the registry in Windows™ (Microsoft Corp.) implementations and various text file formats. The data browsing application 120 may also access local data sources 131, such as local XML documents and/or other local databases.
The data browsing application 120 receives XML data (an XML document) in response to data source requests. This XML document is an hierarchical tree structure comprising a root element with possibly sub-elements, each of which may in turn comprise sub-elements of its own. Each element in an XML tree is identified by a name.
Optionally associated with each element of an XML tree is a general text string referred to as the text value of the element. This is typically true for leaf elements of the tree, that is, elements containing no sub-elements, but may also be true for non-leaf elements. Also
Special hyperlink attributes may also be present in the XML data, the targets of which can be entities such as external files, an XML element residing in the same or 00 r 5 another XML document structure, or further data source requests. The latter type of (Ni hyperlink can enable a user to use the data browsing application 120 to browse through a data source, with XML data being presented to the user with each browsing step. Data servers can include return hyperlinks in their generated XML data.
The data browsing application 120 automatically selects the most appropriate display types for the XML data at each browsing step. These display types include tree, table, bar and line graph, xy scatter plots, and 2D grids. The method of selecting the most appropriate display types is described in Section 5. The result of this presentation step represents a view of the data. The user can effect presentation changes to this view of data and save the resulting view of data for future use. Saved views of data can act like data sources. They are associated with a query and when a user selects to present a view of data, the query is executed. This results in an XML document which is presented as described in Section 5.

The data browsing application 120 also enables users to create new views of data from existing views by manipulating displayed data in a graphical user interface (GUI). This process is described further in Sections 6 and 7. The method of creating new views of data can use recommending services to introduce previously unknown sources of data to the user (see Sections 8, 9 and 10).

Finally the data browsing application 120 enables users to personalise their view of data by creating mappings which serve to map data from data sources of interest to a form more understandable by the user. These mappings can be stored for re-use and exchanged with other users. The method of creating new mappings is described in Sections 3 and 4. The process of exchanging sets of mappings is described in Section 11.
The data browsing application 120 in Fig. 1 can alternatively be implemented as a client-server application. In this case, a single instance of the server application may run on a corporate Intranet and users may use a client to access this server. This alternative implementation has the advantages that XML document and schema caches can serve the organisation and data is not duplicated over many different installations on the Intranet. The client of such a client-server implementation can be implemented within a commonly-used Web browser such as Netscape Navigator™ (Netscape Corp.) or Internet Explorer™ (Microsoft Corp.).
The methods described herein are preferably practised using a general-purpose computer system 1100, such as that shown in Fig. 11, wherein the processes of Figs. 1 to 64 may be implemented as software, such as an application program executing within the computer system 1100. In this regard, the computer 1100 may be configured to operate as the local computer 100, or as required, as one of the servers 150 to 153. The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer from the computer readable medium, and then executed by the computer. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer preferably effects an advantageous apparatus for the methods described herein.
The computer system 1100 comprises a computer module 1101, input devices such as a keyboard 1102 and mouse 1103, and output devices including a printer 1115 and a display device 1114. A Modulator-Demodulator (Modem) transceiver device 1116 is used by the computer module 1101 for communicating to and from a communications network 1120, for example connectable via a telephone line 1121 or other functional medium. The modem 1116 can be used to obtain access to the Internet 101, and other network systems, such as a Local Area Network (LAN) or a Wide Area Network (WAN).
The computer module 1101 typically includes at least one processor unit 1105, a memory unit 1106, for example formed from semiconductor random access memory (RAM) and read only memory (ROM), input/output interfaces including a video interface 1107, and an I/O interface 1113 for the keyboard 1102 and mouse 1103 and optionally a joystick (not illustrated), and an interface 1108 for the modem 1116. A storage device 1109 is provided and typically includes a hard disk drive 1110 and a floppy disk drive 1111. A magnetic tape drive (not illustrated) may also be used. A CD-ROM drive 1112 is typically provided as a non-volatile source of data. The components 1105 to 1113 of the computer module 1101 typically communicate via an interconnected bus 1104 and in a manner which results in a conventional mode of operation of the computer system 1100 known to those in the relevant art. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun SPARCstations or alike computer systems evolved therefrom.
Typically, the application program is resident on the hard disk drive 1110 and read and controlled in its execution by the processor 1105. Intermediate storage of the program and any data fetched from the network 1120 may be accomplished using the semiconductor memory 1106, possibly in concert with the hard disk drive 1110. In some instances, the application program may be supplied to the user encoded on a CD-ROM or floppy disk and read via the corresponding drive 1112 or 1111, or alternatively may be read by the user from the network 1120 via the modem device 1116. Still further, the software can also be loaded into the computer system 1100 from other computer readable media including magnetic tape, a ROM or integrated circuit, a magneto-optical disk, a radio or infra-red transmission channel between the computer module 1101 and another device, a computer readable card such as a PCMCIA card, and the Internet and Intranets including e-mail transmissions and information recorded on Web sites and the like. The foregoing is merely exemplary of relevant computer readable media. Other computer readable media may alternately be used.
Some of the herein described methods may alternatively be implemented in dedicated hardware such as one or more integrated circuits. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
2. Data Components and Views

In the following sections, the term data component will be used in a general sense to refer to an identifiable unit of data. In the preferred arrangement, this unit of data corresponds to an identified XML element or attribute. If an XML Schema exists for the data, then a data component should be able to be associated with either an element or attribute declaration (and definition). The name of the data component is taken to be the name of the XML element or attribute.
A data node is a data component which corresponds to a uniquely-identified XML node. A data node can be identified by a single XPath expression which evaluates to a single node in an XML tree. Alternatively, a particular element can exist as part of a repeated structure in an XML document. For example, in the XML document fragment indicated below having elements A, B and C,

<A>
  <B>...</B>
  <C>...</C>
</A>
<A>
  <B>...</B>
  <C>...</C>
</A>
etc.,

the element B occurs within the repeated structure of A. If all the B elements were to be extracted from the XML document fragment and presented, for example, as a column of a table, the collection of data is referred to as a data set. A data set can be identified by an iterator and a path (XPath expression) relative to the iterator. In the case of the above example, B is a data set having an iterator and a path relative to the iterator. Furthermore, if B represented a numeric value (ie. was quantifiable), then the element B could also act as a data series (with respect to element C). In other words B could be graphed with respect to C. A data series is a specialisation of a data set.
Identification of a data series requires an iterator (as for a data set), a path (as for a data set) and also a label or independent axis (eg. x-axis) path relative to the data set's iterator. So, in the case of the above example, if B was numeric it could also act as a data series having a label path, C. Alternative implementations could allow the label of a data series to be a further data set without departing from the scope of this disclosure. However, when the iterator for the data series and its independent data set differ, extra knowledge is required to infer the correspondence between the independent and dependent data sets.
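The iterator-plus-relative-path model above can be sketched with Python's standard library. This is an illustrative sketch only: the XML content and the chosen paths are assumptions, not part of the specification.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<root>"
    "<A><B>1</B><C>Jan</C></A>"
    "<A><B>4</B><C>Feb</C></A>"
    "<A><B>9</B><C>Mar</C></A>"
    "</root>"
)

# A data set: an iterator (.//A) plus a path (B) relative to each
# iteration node.
iterator, path, label_path = ".//A", "B", "C"
data_set = [a.findtext(path) for a in doc.findall(iterator)]

# A data series: the same data set paired with a label (x-axis) path,
# also evaluated relative to the iterator.
data_series = [(a.findtext(label_path), float(a.findtext(path)))
               for a in doc.findall(iterator)]

print(data_set)     # ['1', '4', '9']
print(data_series)  # [('Jan', 1.0), ('Feb', 4.0), ('Mar', 9.0)]
```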
Data nodes, data sets and data series can be considered specialisations of a data component because each entity is still associated with a particular element declaration.
So, in the following description the term data component will be used when a process is described in general terms. However, the appropriate term will be used when specific examples are described. For example, if the process of copying columns of a table display type is being described, then the table column will be referred to as a data set.
The data browsing application 120 allows users to create personalised views of data over data sources that are of interest to them. A personalised view of data will subsequently be referred to as a "data view". The personalisation refers to the possible use of one or more mappings. A mapping serves to map data from the data sources of interest into a form desired by the user in a data view. In other words, a mapping defines how one or more data components from one or more data sources are renamed, transformed or combined into a new target data component that is part of a data view.
Preferably, the new target data component is more meaningful to the user than the unmapped (source) data component(s). The target data components preferably exist in a unique namespace that is created to hold the mappings created by the user. The source data components of a mapping may exist in any referenced namespace and their definitions may be stored in any schema, which can be located over the Internet. This mapping process is depicted in Fig. 2.
For example, a user may create a target data component called MyName. This target component may have a mapping that takes the source data components SecondName and FirstName from a namespace such as http://www.example.com/abc, represents them in the form SecondName, FirstName, and then converts the resulting data component to upper case. In other words, the user would see data of the form "SMITH, JOHN" as being an instance of their target data component, MyName. The user can specify more than one mapping for any one target data component. The user can also specify whether the source data components used by the mapping should be removed from the user's view of the data. In the above example, such removal may be desirable because the user may not want to see SecondName and FirstName alongside MyName in his/her data view.
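The MyName mapping just described can be expressed as a small function. This is a sketch under assumed names; the mapping machinery of the data browsing application 120 itself is not shown.

```python
def my_name(second_name: str, first_name: str) -> str:
    """Map the source components SecondName and FirstName to the
    target component MyName: represent them in the form
    "SecondName, FirstName", then convert the result to upper case."""
    return f"{second_name}, {first_name}".upper()

print(my_name("Smith", "John"))  # SMITH, JOHN
```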
The objective of creating new target data components, which may combine, replace or modify existing source data components, is to provide a more understandable and consistent view of data to a user. In other words, the defined mappings can be used to provide a view of data to the user without the user being aware of the data transformations occurring. The data view, with which the user ultimately interacts, is similar to a view constructed by a database administrator which may perform a join across two or more tables in a relational database. The data view however differs in three main ways: the data view can effectively provide a join across two or more heterogeneous data sources; the data view may contain new (mapped) data components which are derived from data components defined to exist for the data sources; and the data view may contain presentation specifications for data components.
A data view can be understood as a "rich" query: essentially a query that can join data from different data sources, effect naming and data transformations on the source data, and enforce data-specific presentation characteristics. In the data browsing application 120, a data view is defined by a data view definition. This definition contains an XQuery expression which specifies how data is obtained for the data view.
The definition can also contain other information for the data view (eg. exported mappings, presentation rules, properties of the data view, etc.). Data view definitions are described in more detail in Section 11. In its simplest form a data view definition is simply an XQuery expression which can be appended to the URI of a data source as a query string.
Data views are dynamically generated from live data. In other words, the data browsing application 120 does not store or warehouse data views which may derive their data from more than one data source or may require transformations of the data. When a user selects to view a data view in the data browsing application 120, the data view's query is executed. This results in data being dynamically collected from one or more data sources, appropriately mapped and presented.
As mentioned above, database administrators have traditionally been responsible for creating views of data, using tools which enable them to work with definitions of tables and their contained data fields and relationships. One way for a user to create a personalised data view is to interact with the schemas (or data dictionaries) of the data sources of interest. These schemas show the classes of data contained within the data sources of interest and the relationships between the data.
In the data browsing application 120, a "schema view" is used to represent the schemas of one or more data sources. A schema view shows the classes of data contained within the data sources and relationships between the data. Unlike a data view, a schema view does not contain instance data. The schema view is conceptually similar to the graphical representation of tables and their associated columns of a relational database management system (eg. Microsoft Access™).
The schema view displays the classes of data in an hierarchical fashion consistent with the XML form of the data which is received by the data browsing application 120.
Preferably, the classes of data and their inter-relationships are defined using the W3C Recommendation, XML Schema (see http://www.w3.org/XML/Schema). This means that if data in a data source is stored in a set of relational tables, the schema view of that data source would be derived from the XML schema definitions of the data source and therefore would be essentially hierarchical in nature. The function of a schema view is to show the user classes of data from which a new data view can be constructed.
Referring now to Fig. 3, the method of displaying a schema view over selected data sources is now described. The creation of the mappings used by this process will be
described in Sections 3 and 4. A schema view is preferably displayed when a user wishes to construct a new data view. Schema views displayed by the data browsing application 120 are dynamic and usually partial, in that they depend on those data sources the user has selected. On commencing a session in the data browsing application 120, a user can be automatically associated by the data browsing application 120 with a set of mappings. These can be considered part of a user's working environment or application settings. Alternatively, execution of the data browsing application 120 enables the user to select a set of mappings to use, as depicted in step 200 of Fig. 3. The data browsing application 120 then enables the user in step 202 to select a number of data sources in which the user is interested. The data browsing application 120 then identifies, in step 204, the schema definitions for the data components contained in selected data sources and forms an initial schema view over the sources from those schema definitions.
Referring now to Fig. 4, step 204 is described in further detail. After the user has selected a data source in step 202, the data browsing application 120 identifies the XML element associated with that data source in step 302. In step 304, the system attempts to locate an XML schema definition for that element. This requires searching for a definition in the namespace defined for the element. In the preferred arrangement, this search is performed by first identifying all the schema documents that have been encountered for that namespace. These schema documents may have been encountered by way of XML schema schemaLocation hints provided in XML documents or other schemas. The encountered schemas are preferably stored in the local cache of the data browsing application 120, for example, within the memory 1106 of the local computer system 100. Alternatively, the encountered schemas can be fetched across the Internet 101 and re-parsed when required. If a definition for the element is located, then the data browsing application 120 attempts in step 306 to recursively locate, for that definition, all the possible child element definitions and attribute definitions. Preferably attribute definitions are differentiated from child element definitions by colour in the displayed schema view. Alternatively, the names of attributes can be prefaced by a meaningful symbol, such as "@". The located definitions are then represented as a tree structure in step 308. This tree structure forms the initial schema view of step 204 in Fig. 3. The sub-routine of step 204 then concludes at step 310.
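The recursive collection of child definitions in step 306, and their representation as a tree in step 308, can be illustrated with a toy schema representation. The dictionary form below is an assumption for brevity; resolving real XML Schema element and attribute declarations is considerably more involved.

```python
# Hypothetical, simplified schema: element name -> child element names.
SCHEMA = {
    "Managers": ["SecondName", "FirstName"],
    "SecondName": [],
    "FirstName": [],
}

def build_tree(name, schema, depth=0):
    """Recursively locate child definitions for an element and
    represent them as an indented tree (cf. steps 306-308)."""
    lines = ["  " * depth + name]
    for child in schema.get(name, []):
        lines.extend(build_tree(child, schema, depth + 1))
    return lines

print("\n".join(build_tree("Managers", SCHEMA)))
```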
Returning now to Fig. 3, the mappings associated with the identified mapping set are then processed. The first mapping of the set is selected in step 206. The data browsing application 120 in step 208 checks that all source data components required by the mapping exist in the current schema view. If they do, then control passes to step 214, where the mapping is applied. This involves creating a definition for the target data component in the current schema view and, if required, removing some or all of the associated source data component definitions from the schema view. In the preferred arrangement, created target data component definitions are highlighted from the native schema component definitions in the schema view using display colour. This is not essential, and need only be implemented in order to make it clear to the user which component definitions are derived from the mappings.
It should be noted that a mapping can be applied to both a schema view and a data view. When a mapping is applied to a schema view, the result is a definition created in the schema view for the target data component, with definitions for the one or more source data components of the mapping optionally removed from the schema view.
When a mapping is applied to the data view, the data components corresponding to the one or more source data components of the mapping are transformed, according to the mapping, to a data component corresponding to the target data component of the mapping.
Once a mapping is processed, the schema view is updated in step 216.
Preferably, the updated schema view is displayed to the user by way of the display 1114; however, it is also possible to only display the updated schema view when all mappings associated with the selected mapping set have been processed. On completion of step 216, the data browsing application 120 checks whether there are any more mappings to process in step 210, and if so the next mapping is retrieved in step 212 and control returns to step 208. If, in step 208, definitions for all the source data components required by the mapping were not in the current schema view, then the mapping is not processed and control passes to step 210. When there are no more mappings, the procedure concludes at step 220.
The procedure described above with reference to Fig. 3 can be achieved using a user interface, an example of which is shown in Fig. 6. Fig. 6 shows a graphical user interface (GUI) image 600, which may be reproduced by the display device 1114, and at the top of which the user is presented with a list 601 of his/her commonly-used data sources. The user can select one or more of these data sources, for example by manipulating the mouse 1103, with the selected data sources being highlighted.
In this example, selection is highlighted by the data source being enclosed by a box. With each selection, a panel 602 arranged below the list 601 is immediately updated with a constructed schema view formed using the process described above with respect to Fig. 3.
Preferably, the user can navigate through the schema view panel 602, expanding and collapsing the indicia for data component definitions as desired. The indicia used to represent data component definitions are preferably derived from the names of elements.
However, some other element information, for example the documentation nodes associated with the element in the schema, could also be used to represent the data component in the schema view.
The schema view constructed using the process shown in Fig. 3 can be used to collect constraints for a new data view across the selected data sources. The constraints may be collected for combination in the schema view in either a conjunctive or disjunctive manner. The dynamically-constructed schema view enables a data view to be specified in terms of the user's mappings. When a data view is to be presented to the user, in order to obtain the source data the mappings must be decomposed into source data components by inverting mappings where possible. In some cases, it is necessary to pass the responsibility for some of the mapping inversion to the data server(s). For example, if a target data component, X, is defined to be the concatenation of the string source data components, A, B and C, and the user enters the constraint X = "Hello, Mr Jones", then it is difficult to efficiently invert the constraint remote from the data. In the preferred arrangement, if the source data components are from a single data source, then a "LET" clause of an XQuery request is used where possible to define a variable for X so that the constraint on X can be used directly.
Clearly, this solution is only possible where all the source data components arise from one data server. In cases where inversion at the XQuery formulation is not possible, the data browsing application 120 must process the constraint after receipt of the source data. However, where possible, constraints on mapped components are inverted before queries are passed to the data servers.
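One way the "LET" clause strategy could be realised is by generating the XQuery text on the client. The query shape, variable names and component paths below are illustrative assumptions only, not the specification's actual query formulation.

```python
def build_query(source_paths, constraint_value):
    """Build an XQuery request that defines a variable for the mapped
    component X via a let clause, so the user's constraint on X can be
    evaluated at the data server rather than inverted on the client."""
    concat_args = ", ".join(source_paths)
    return (
        "for $p in /people "
        f"let $x := concat({concat_args}) "
        f'where $x = "{constraint_value}" '
        "return $p"
    )

print(build_query(["$p/A", "$p/B", "$p/C"], "Hello, Mr Jones"))
```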
In the preferred arrangement, as data is returned from the data servers 140-143 in response to a query, the data is transformed by the data browsing application 120 according to the transformations defined by relevant mappings. This means that the data is presented to the user in terms of the user's mappings.
A constructed data view can be exchanged with other users. In this event, any of the mappings used by the data view at creation time must be serialised and packaged with the definition of the data view. When a new user receives a data view from another user, the serialised mappings are used to ensure that the data view appears as it was created. It is also possible for the new user to import mappings contained in the data view into the new user's own set of mappings. This process of sharing data views and importing mappings is discussed in more detail in Section 11.
3. Interactively Defining Mappings

The mapping creation process can now be described with reference to the flowchart of Fig. 5. The process may be implemented as a separate application program executed by the processor 105 within the local computer system 100. In the preferred arrangement, the process is incorporated in the data browsing application 120. At step 500, the user selects the data sources from which data component definitions are to be selected as source data components for new mappings. A schema view for the selected data sources is constructed, as described using Fig. 3, and displayed to the user in step 502. The schema view is preferably displayed as a tree, in which the user can expand and collapse nodes in a similar way to that of Fig. 6. Each data component definition of this initial schema view is represented by an indicium that can be selected by the user. In its simplest form, this indicium is just the element name of the data component definition as described above. The user then indicates that a new target data component is to be created in step 504. In step 506, the user is enabled to select an indicium from the schema view to indicate that the associated data component definition is to be involved in the mapping. Thus, for example, if a target data component is to be created to be the concatenation of two or more source data components, the user may double-click on the indicium representing the first source data component definition for the concatenation.
The type definition for the target data component defaults to that of the initialising source data component. Preferably, the type information is represented by an XML Schema type definition. In the arrangement of Fig. 5, steps 504 and 506 are performed as a single action for the initialising source data component.
When the initialising source data component definition is selected, a GUI window 700 such as shown in Fig. 7A is displayed on the display 1114. On initial display, the GUI window 700 shows a target data component having the same name as the initialising source data component. If a component with this name already exists in the user's namespace, then the user is asked if he/she wishes to add a further mapping to this (target) name. If the user confirms this, then no action is required. If the user responds to the prompt with "No", then the focus is set to the target name and the user is required to alter the name appropriately.
In the case of the example GUI window depicted in Fig. 7A the user has selected to change the target name from SecondName to MyName.
In a simple implementation, all target names have no context (ie. the names all can be represented by element declarations, which are direct children of the schema element in an XML Schema document) and are assigned to the user's namespace. In alternative implementations, users may specify some structure in their namespace and target data components could also have an hierarchical context. For example, in the GUI 700 of Fig. 7A, "Preferred Term" has no context (ie. it is not contained within another specified element). If a context within the user's namespace was to be specified, the user could simply enter the context as part of the preferred name. Alternatively, the GUI may display a window with existing contexts able to be selected, or allowing a new context to be created.
A set of data examples for the source data component is then retrieved in step 508 from the data source for which the selected data component was defined. The number of examples retrieved can be predetermined or depend on the type of computing environment in which the mapping is being created. For example, if the data sources were being accessed over the Internet 101 and Internet access was being provided by a slow modem (eg. 1116), then fewer examples might be retrieved. The retrieved examples are then added in step 510 to an example list, which is displayed as the list 720 in the GUI 700 of Fig. 7A. In the case of the initialising source data component, the retrieved examples represent the initial example list. As further source data components are selected to be involved in the mapping, example data are added to the end of each example in the list.
For example, if the source data component SecondName is selected, then the example list may look like:
Smith
Jones
BROWN
WU
Hetherington
etc.
Note that some names are completely capitalised whereas others are not.
If a further source data component, FirstName, were then selected, then the example list would appear as (see Fig. 7B):
SmithAlan
JonesJenny
BROWNLouise
WUJulie
HetheringtonRupert
etc.
The example list 720 serves two purposes. First, the list 720 shows the user how the data is actually stored in the data source. Very few (database) schemas highlight notational standards that may have been adhered to when data was collected and assimilated into a database. Also, if this information exists, then it is typically very verbose. Examples often explain the standards to users of the data much more easily. For instance, in the above example, a user may deduce that the data defined by the source data component SecondName has been compiled with little attention to case consistency (ie.
upper or lower case may have been used). On the basis of this information, the user may choose to apply a function to ensure that this data was either all upper or lower case in the user's view.
The second purpose of the example list 720 is to provide an intuitive way for users to define mappings. Typically the task of defining mapping transformations is left to a system administrator or other such experienced person. This usually occurs because the creation of mappings typically requires an understanding of functional and mathematical processes. As such, whereas a software engineer may understand that the sequence of unary functions toUpperCase(), insert(6, "/test") applied to a source data component means take the data, convert it to upper case, and then insert the string "/test" at position 6 in the resulting string, an average user may not be happy to apply such means to create transforms. This notation has the additional difficulty of the user not understanding whether the position index is zero-based or one-based.
The preferred arrangement provides the user with a means of implying these transforms by allowing the user to select an example from the example list 720, and then edit the selected example to demonstrate the form of the desired target data component.
For instance, in the above-mentioned example, the user could select the example "JonesJenny" and edit this example to read "JONES, Jenny". The data browsing application 120 then analyses the edited example and attempts to infer the applied function(s). In this case, the unary function toUpperCase() has been applied to the SecondName source data component, and then a connector of ", " has been added between the two source data components. The result of this inference is shown in Fig. 7C. The method used to infer the transforms required by the mapping is described in more detail later.
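A greatly simplified version of this inference can be sketched as follows. It recognises only an upper-case transform and literal connector text, and assumes the edited components keep their original lengths; it is an illustration, not the actual inference method described later with reference to Fig. 8.

```python
def infer(sources, edited):
    """Infer, per source component, an optional toUpperCase() transform
    and the literal connectors between components, from one edited
    example (toy sketch)."""
    result, rest = [], edited
    for i, src in enumerate(sources):
        matched = rest[:len(src)]
        # Upper-case transform inferred only if the edited text differs
        # from the source but matches its upper-cased form.
        func = ("toUpperCase()"
                if matched == src.upper() and src != src.upper()
                else None)
        result.append(func)
        rest = rest[len(src):]
        if i + 1 < len(sources):
            cut = rest.upper().find(sources[i + 1].upper())
            result.append(rest[:cut])  # connector text between components
            rest = rest[cut:]
    return result

print(infer(["Jones", "Jenny"], "JONES, Jenny"))
# ['toUpperCase()', ', ', None]
```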
It is also possible for the user to apply some presentation characteristics to the data of the target data component. For example, such presentation characteristics may stipulate that the SecondName portion of the target data component always be displayed in bold or in a particular colour. These characteristics can also be applied by demonstration and then stored for use when transforming incoming data.
The above method of allowing users to define transforms by demonstrating the required transformation using an (edited) example is an example of the technique known as "programming by example" or PBE. PBE is a technique that has previously been used for programming tasks such as inferring regular expressions from a set of examples provided by the user, collecting and collating regularly-accessed information on the web, detecting and automating repetitive tasks in a user interface, and defining grammars (eg.
for e-mail addresses). These tasks, like that of defining transforms, typically require identifying abstractions or generalisations (eg. formulas, grammar rules) for a class of actions or data. In general, people appear to be more comfortable thinking about concrete examples than they are about abstractions such as functional transforms and grammatical rules. For this reason, the above briefly described method of interactively defining transforms uses edited examples, and is thus more intuitive to many people than methods based on selecting a set of functions to apply to the data, as used in the prior art.
The GUI used by the preferred arrangement to perform a mapping is described now with reference to Figs. 7A, 7B and 7C. It will be appreciated that, in the field of GUIs, the term "button" is a colloquial name for an icon that is user selectable, for example using the mouse 1103. The name of the target data component is shown in a text field 701. Associated with the target data component field 701 is a "Presentation" button 702 and a "Defn" button 703. The "Presentation" button 702 can be used to view or edit presentation details for data conforming to the target data component. This functionality is discussed later in more detail. The "Defn" button 703 enables the user to edit the type definition information for the target. In a simple implementation, the user may directly edit the XML Schema text for the definition of the target element.
Alternative implementations may provide an interface that controls the editing actions of the user more tightly. Editing the type information is largely unnecessary for most transforms. This feature has been included in the preferred arrangement mostly for the purposes of completeness for advanced users.
The initialising source data component is shown as the first source data component in the mapping workspace 710, and the name of the source data component is shown in a text field dialog box 712. A function selector 713 is shown adjacent to the text field 712 to enable manual function selection to be used to supplement the automatic process if desired. Preferably, a drop down menu of available unary functions may be selected from the selector 713. The manual selection and editing of functions are not essential and are only provided to supplement the automatic method for more advanced users. Each source data component in the mapping workspace 710 is also associated with an "Info" button 714 and a "Presentation" button 715. The "Info" button 714 is used to display any information that may help the user in defining a mapping. In the preferred arrangement, the "Info" button 714 is used to show any content that has been associated with a <documentation> tag in the XML Schema definition for the source data component. The "Presentation" button 715 can be used to assign, edit and view presentation characteristics that are to be applied to data defined by the source data component.
Each source data component in the workspace 710 is preceded by a connector 711. The connectors 711 may contain any connector text, binary operators (such as mathematical functions) or n-ary operators (such as min, max, sum). Below the mapping workspace 710 is the example panel 720. In Fig. 7C, this panel shows the results of the edited example described above. The results of the inferred solution are also reflected in the function selector 713, where the unary function, toUpperCase(), is shown in Fig. 7C as being applied to the source data component with the name SecondName, to provide that the second name of a person identified is presented in upper case format. As seen, more than one text field 712 and corresponding ancillary components may be included in the mapping workspace 710. In this example, as shown in Fig. 7C, a connector 711b including a comma and a single character space is defined by the user to precede the FirstName term.
A checkbox 705 can be checked to control whether the context of the names used by the source data components is displayed. The context defines a hierarchical position of a source data component in the schema view. For example, from Fig. 6, the context of the SecondName source data component is HumanResources/Research/AppliedTechnology/Managers. Including contexts in the data component names, which can be long as shown by the above example, can make the interface appear complicated. Even if not displayed in the GUI 700, the context of each source data component is stored as part of a mapping. In an alternative implementation, the context for a source data component can be included as information presented to the user when the "Info" button 714 is selected.
Returning now to Fig. 5, once a user has selected the initiating source data component into the mapping workspace 710, the user can then decide in step 512 if the mapping is to involve further source data components. If so, then control returns to step 506 and the user can select the indicium of the desired source data component from the displayed schema view, drag the indicium using the mouse 1103, and drop it in the mapping workspace 710. If the drop position is located over an existing source data component of the mapping, the data browsing application 120 assumes that the source data component is to be replaced by the dragged component. Otherwise the dragged component is added to the end of the list of source data components. This results in the example list 720 being updated again as described above in step 510. This process continues until the user decides that all the required source data components exist in the mapping workspace.
SThe order in which the source data components are moved into the mapping workspace can be important. For example, if a user wishes to create a new numeric target that was based on a transform where X (A B) C, then the source data components A, B, and C would need to be moved into the mapping window in the order A, B, and C, or B, A and C.
In step 514, the user selects an example from the example list 720 to edit. This action results in the selected example being highlighted and the user is able to edit the example as a string and thereby demonstrate to the system the required transformations that should be applied to the data. In such instance, no functions need be selected using the function selector 713. When the user presses "return" on the keyboard 1102 to indicate that the user has finished editing the example, the data browsing application 120 attempts to infer the transformation indicated by the user's example.
When the inference step 518 is complete, the example list 720 in Fig. 7C is updated according to the resulting inferred transform in step 520. This serves to clearly show the user the transform that has been inferred. If the inferred mapping is found to be correct in step 522, as may be determined by user observation, the mapping is then stored in step 524, the current schema view updated with the target data component in step 526, and the mapping creation process concludes in step 528.
If the inference step did not accurately infer the transform(s) that the user desired, then the user can select another example and repeat the process until a correct result is achieved. The inference step as described later in reference to Fig. 8 only attempts to find solutions for the part of the example that was edited. So, in the example depicted in Fig. 7, because FirstName was not edited it would not have been included in the analysis.
The process of Fig. 5 described above can be supplemented with functional editing of transformations required for mappings. For example, in the preferred arrangement, the user can also select functions from a list to apply to the data as part of the mapping. These functions can be added and removed from the function list for a particular source data component, and the parameters of functions can be simply edited.
The function selector 713 of Fig. 7A enables this functionality.
A target data component for a mapping is added to the current schema view such that its hierarchical context is the maximum common context of the source data components involved in the mapping. For example, if a target data component was defined with three source data components having contexts A/B/C, A and A/B/C/D, the context of the target in the view would be A. As mentioned previously, the user can specify for each mapping whether source data components associated with the mapping are removed from the schema view (ie. data associated with the source data components would not appear in any data views derived from this schema view). A user indicates that a particular source data component is to be removed by checking the Remove Source Component checkbox 716 in Fig. 7C.
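The "maximum common context" rule described above can be sketched as the longest common leading path of the sources' contexts. The function name is an assumption made for illustration only.

```python
def maximum_common_context(contexts):
    """Return the longest common leading path of slash-separated contexts."""
    split = [c.split("/") for c in contexts]
    common = []
    for parts in zip(*split):          # walk the paths level by level
        if all(p == parts[0] for p in parts):
            common.append(parts[0])    # this level is shared by every context
        else:
            break
    return "/".join(common)

print(maximum_common_context(["A/B/C", "A", "A/B/C/D"]))  # A
```

With the contexts from the example above, the common context is A, so the target data component would be inserted under A in the schema view.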
Fig. 10A shows an initial schema view before a mapping is applied. In this schema view, schema component SA contains data components A, B and C, and data component B further contains data components D and E. Fig. 10B shows the result of adding a target data component, Z, which is derived from data components D and E, to a schema view without removal of the source data components associated with the mapping. Fig. 10C shows a similar schema view, but with removal of source data components. Preferably, the user can define this property for each source data component involved in a mapping. In other words, some source data components of a mapping can be removed while others are retained.
If a target data component is associated with more than one mapping then, in the preferred arrangement, the target data component is also inserted into the schema view with a hierarchical context that is the maximum common context of the contexts of the various mappings. This is useful because this operation allows the user to manipulate the target data component to select data from multiple data sources using a single constraint.
When a mapping has been defined, selecting a "Save" button 730 causes the mapping to be stored. Such an action is also preferably used to update the current schema view with the new target data component.
4. Inferring Mapping Transforms from an Edited Example

A method of inferring the transforms associated with a mapping from a user-edited example is described now with reference to Fig. 8, which is a flowchart representative of a computer application program that may be stored in the memory 1109 of the local computer 100 and executed as part of the data browsing application 120 by the corresponding processor 1105. The method begins with the user submitting an edited example for analysis. Such submission may be by way of the GUI 700. The data browsing application 120 in an initial step 800 creates an empty solution list. A determination is then performed in step 802 to establish whether binary or n-ary functions are possible. Preferably, binary or n-ary functions are only considered possible if (i) there is more than one source data component involved in the transformation, and (ii) at least one pair of consecutive source data components can participate in a binary or n-ary operation (eg. have a numeric data type). Note that concatenation is strictly a binary operation but, in the analysis, concatenation is treated as the default binary operator.
If binary or n-ary functions are possible, control passes to step 804 where the data browsing application 120 creates a list of unary contenders for each of the source data components for the selected example. A unary contender is the possible result of applying one or more of the predetermined unary functions to unedited example data for a source data component. Unary functions are defined to be those functions that act on a single source data component. In a preferred arrangement, unary functions can be applied in sequence, with the maximum number of functions that can be applied in any sequence being three. Clearly, other limits to the number of functions that can be applied in sequence can also be used. In other words, each unary function in a sequence is applied to the result of the previous function application step. The unary functions preferably implemented are shown in Table 1. Other functions may also be implemented without departing from the scope of the present disclosure.
Table 1. Preferred Unary Functions

Function Name | Operand Type | Description of result
initWord(n) | String | String containing the first n words of the operand
words(start, n) | String | String containing n words starting from the start word
init(n) | String | String containing the first n characters of the operand
toUpperCase() | String | String representing the operand converted to upper case
toLowerCase() | String | String representing the operand converted to lower case
capitalise() | String | String in which the first characters of all non-conjunction words in the operand are capitalised
capitaliseAll() | String | String in which the first characters of all words in the operand are capitalised
toLanguage(xml:lang) | String | String in which the operand is translated to the language specified by the xml:lang
noPunctuation() | String | String in which all punctuation in the operand is removed
insertText(text, n) | String | String in which the string 'text' is inserted at position n in the operand
noConjunctions() | String | String in which all the conjunctions in the operand have been removed
toNumber() | String | Number if the operand can be parsed as a number
toString() | Number | String representing the operand
negate() | Number | Numeric value which is the negation of the operand
toInteger() | Number | Numeric value of the operand as an integer (rounded if necessary)
toDouble() | Number | Numeric value of the operand as a double precision number

Each implemented function has a specified operand type, and a description of the result is shown in the third column of Table 1. If the operand type criterion is not satisfied then a unary contender does not result from the application of the function. The initial operand types are obtained from the base primitive types of the XML schema definitions of the source data components. In a preferred arrangement, these primitive types are mapped to the base types of Integer, Double and String as shown in Table 2.
Alternatively, it may be preferable to use the XML Schema base types as those of the mapping application. In such a case the type mapping shown in Table 2 would not be necessary and the operand type of Table 1 may contain XML Schema primitive types.
Table 2. Mapping of base XML Schema primitive types to base types

Base Type | Base XML Schema Primitive Types
Integer | decimal, gYear, gMonth, gDay
Double | float, double
String | All other primitive types

Returning now to Fig. 8, in step 804 a list of unary contenders for each source data component of the example is generated. The preferred order in which contenders are added to this list is as follows: (i) the unchanged source data component; (ii) contenders that result from the application of a single unary function; (iii) contenders that result from a sequence of two unary functions; and (iv) contenders that result from a sequence of three unary functions.
The order in which unary contenders are created is significant in that functionally simpler contenders are preferably located at the top of the list and therefore are more likely to be involved in a solution.
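The simplest-first generation order described above can be sketched as follows. The two sample functions stand in for the larger set of Table 1, and all names here are illustrative assumptions rather than the patent's implementation.

```python
from itertools import product

def to_upper(s):
    return s.upper()

def init_word(s):
    return s.split()[0] if s.split() else s

UNARY_FUNCTIONS = [to_upper, init_word]

def unary_contenders(value, max_depth=3):
    """Return contender values ordered from functionally simplest to most complex."""
    seen = [value]                            # (i) the unchanged component comes first
    for depth in range(1, max_depth + 1):     # (ii)-(iv) sequences of 1, 2, then 3 functions
        for funcs in product(UNARY_FUNCTIONS, repeat=depth):
            result = value
            for f in funcs:                   # each function applies to the previous result
                result = f(result)
            if result not in seen:
                seen.append(result)
    return seen

print(unary_contenders("John Smith"))  # ['John Smith', 'JOHN SMITH', 'John', 'JOHN']
```

Because simpler contenders are appended first, they sit nearer the top of the list and are therefore preferred when solutions are later searched in list order.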
In step 806 each of the unary contenders is tested for presence in the edited example. This operates to filter the unary contender list for each source data component, with each member of the filtered list having a valid start and end position in the edited example. Step 806 results in a filtered unary contender list. This list is required for step 810 (described below). Control then passes to step 807 where n-ary solutions based on the unary contenders are detected. The process of detecting n-ary solutions involves testing all combinations of unary contenders. Detected n-ary solutions are added to the solution list. Control then passes to step 808 where binary solutions based on the unary contenders are detected. Binary functions (or operations) are assumed in the preferred arrangement to operate from left to right. In other words, the operands of an operation can be the result of the previous operation plus a new contender. The process of detecting possible binary solutions involves testing all combinations of unary contenders, with each combination having an ordered contribution from each of the source data components.
The binary solutions found are added to the solution list.
If it was determined in step 802 that binary or n-ary functions were not possible, then the equivalent of steps 804 and 806 are merged in a single step 815. This is advantageous because the merger removes the need to store the large unary contender lists for each of the source data components.
In step 810, which follows each of steps 808 and 815, a search is performed for solutions based on the filtered unary contender lists. Each solution must be composed of a filtered unary contender for each source data component. A further requirement of a solution is that the unary contenders do not overlap in the edited example. For example, if a unary contender for the first source data component was located between character positions 3 and 15 in an edited example and a unary contender for the second source data component was located between character positions 10 and 20, then these contenders would not be considered part of a solution because of the overlap between the sets of character positions. Any solutions found in step 810 are then added to the solution list.
The "fittest" solution in the solution list is then determined in step 820. In the preferred arrangement, the cost of any solution is based on two components: (i) the total length of the connectors between contenders in the edited example; and (ii) the weights assigned to individual functions to bias the inference method to find simpler solutions.
Solutions are examined in the order they are added to the solution list so that if a solution is found with a zero cost then step 820 ends immediately. A solution later in the list displaces an earlier solution only if it has a lower cost. Although connectors are really a form of binary operation (ie. concatenation), they are treated as an important contributor to the cost of solutions in the preferred arrangement.
For example, consider a solution for the edited example "SMITH, John [Address: 1100 Main St, Newcastle, NSW]", where the parts that correspond to contenders are SMITH, John and 1100 Main St, Newcastle, NSW. This represents a solution if each source data component is represented in the correct order (ie. SecondName, FirstName, Address). The connector cost of the above example would be proportional to the total length of the connectors (2 + 11 + 1 = 14), and is determined using a sum of characters not attributed to unary contenders. Unary functions may have been used for some of the contenders (eg. toUpperCase()) and so the final cost of the solution would depend on whether costing weights were assigned to the used unary functions.
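The connector-cost computation described above can be illustrated with a short sketch: every character of the edited example not covered by an ordered, non-overlapping contender is counted as connector text. The function name is an assumption for illustration.

```python
def connector_cost(example, contenders):
    """Sum the characters in `example` not covered by the ordered contenders."""
    cost, pos = 0, 0
    for c in contenders:
        start = example.index(c, pos)   # each contender must occur, in order, after the last
        cost += start - pos             # connector between the previous contender and this one
        pos = start + len(c)
    return cost + (len(example) - pos)  # trailing connector, if any

example = "SMITH, John [Address: 1100 Main St, Newcastle, NSW]"
cost = connector_cost(example, ["SMITH", "John", "1100 Main St, Newcastle, NSW"])
print(cost)  # 2 + 11 + 1 = 14
```

Here the connectors are ", " (2 characters), " [Address: " (11 characters) and the trailing "]" (1 character), matching the cost of 14 in the worked example.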
When the fittest solution is detected, the mapping is updated in step 822. In the preferred arrangement, this results in the function list for the source data component in the mapping workspace 710 being updated with the names of any unary functions (and their identified arguments). The connector fields are also updated with either the connector strings or any identified binary or n-ary functions that were required for the fittest solution (see Fig. 7C). The example list is also updated using the new mapping. The mapping process concludes in step 830.
The process for inferring the mapping transforms may vary depending on the type of transforms that a user may wish to perform. Other functions, unary or binary, or indeed n-ary, may be introduced into the process without departing from the scope of this disclosure. In the preferred arrangement the addition of new unary, binary and n-ary functions is relatively simple because such merely requires a class to be added to the system which extends either the UnaryFunction, BinaryFunction or NaryFunction class, and the new function to be added to the corresponding function list. Contenders based on the new function would immediately begin to be generated.
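The extension mechanism just described can be sketched as follows. The class name UnaryFunction mirrors the text, but the method names and registry are assumptions made for illustration; the patent does not specify this interface.

```python
class UnaryFunction:
    """Base class: subclasses implement apply() on a single operand."""
    def apply(self, operand):
        raise NotImplementedError

class Reverse(UnaryFunction):
    """A hypothetical new unary function: reverse the characters of a string operand."""
    def apply(self, operand):
        return operand[::-1]

UNARY_FUNCTION_LIST = []    # stands in for "the corresponding function list"

def register(func_class):
    """Adding the class to the list makes contenders based on it available."""
    UNARY_FUNCTION_LIST.append(func_class())

register(Reverse)
print(UNARY_FUNCTION_LIST[0].apply("abc"))  # cba
```

Once registered, the contender-generation step would pick the new function up automatically, which is the behaviour the text attributes to the preferred arrangement.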
The process described using Fig. 8 is that followed when the transforms of a mapping are to be inferred from scratch (ie. making no assumptions about previous inference sessions or any manually entered transforms that the user may have recorded).
Sometimes it is not possible to unambiguously define a mapping by the editing of a single example, and so the inferring process operates only on the changed part of the edited example. The objective is to refine a part of the mapping.
In the preferred arrangement, the data browsing application 120 detects only those parts of the example that the user edited in the current editing session. This means that an existing mapping can be refined and removes the need for unnecessary processing in the analysis. So, for example, if an initial mapping has three source data components and the user selects an example and only changes the text associated with two of those components, then the inference method described is performed on a subset of the data.
The inference method achieves this by detecting which source data components are affected by the change, and then attempts to find a solution for just the changed part of the example.
This results in a quicker implementation, and also means that the process can be more responsive to the user's changes. For example, rather than waiting until the user has finished editing the example and submitting the changed example to the data browsing application 120 for analysis, the analysis can optionally be performed interactively in parts. If the user moves the cursor by more than some threshold distance in the user interface, then the analysis method can be initiated to generate a solution for the changed part of the example only. The resulting solution is then integrated into a total mapping solution in readiness for any other changes. One issue arising from a progressive approach to finding a mapping solution is that the system must be able to respond quickly to the user's changes. In many cases, a sufficiently quick response may require a smaller set of possible functions to be implemented.
As well as editing the text of the example, the user can also apply various presentation characteristics to the example being edited. For instance, in the preferred implementation, the user can apply font type, font size, style (eg. bold, italic, underline, superscript, subscript, etc.) and colour characteristics to parts of the edited example.
Once the data browsing application 120 has identified a solution using the process described above, the data browsing application 120 can then attribute presentation characteristics to the source data components if they have been applied. Presentation characteristics are assumed to always be applied after any structural transforms have been applied (ie. it is the last transform to be applied to the source data component before that data is included as part of the target data component).
If the user applies a presentation characteristic to the entire example, then the applied characteristic is associated with the target of the mapping and not the source data components. Accordingly, if the user adds a further source data component to a mapping, the data of the further source data component will acquire the presentation characteristics stored with the target data component. However, if a presentation characteristic is only applied to part of the example, then the system will infer which source data component(s) are affected and store the presentation characteristics with only those source data components. For example, in the name example used previously, the user may wish to always display the SecondName part of MyName in bold (eg. SMITH, John).
The user can view the presentation characteristics associated with any source data component by selecting the corresponding Presentation button 715 in the screen layout GUI shown in Fig. 7A. Presentation characteristics attributed to the target data component can be viewed by selecting the corresponding Presentation button 702. The preferred arrangement also allows the user to manually add and change presentation characteristics using the presentation function of both target and source data components.
This may be achieved by selecting the buttons 702 and 715a respectively. If presentation characteristics are defined for both source and target data components, then the characteristics associated with the source data component(s) will be applied before those of the target data component.
One class of mapping transformations, which is critical for data aggregation purposes, is transformations of values having dimensions (and units of measurement) or currencies. Currently many data source schemas do not convey the semantics of measurement or currency, mostly due to the fact that the data sources were created without the expectation of being used outside the domain of creation. This means that when a user from outside this domain views the data, simple field names such as YTD Sales or DistanceTravelled do not convey sufficient information. For example, are the sales values quoted using US$ or AU$, and is the distance in miles or kilometres? While insufficient schema information is provided for data sources, it is up to the user creating the mapping to define the required transformations in the mappings by, for example, specifying a conversion factor.
However, if the definitions of a data source are represented using XML Schema it is possible that the semantics of measurement can be adequately represented. The defined data types of XML Schema already provide for the semantics of time (and date).
Although there are currently no standardised semantics for measurement, the data browsing application 120 uses a library of dimension types with each dimension associated with a predefined set of possible units. Example 1 of XML schema below represents a definition of a length type. This type extends the XML Schema data type of float and each element using this type is associated with a units attribute.
XML Schema Example 1:

<xsd:complexType name="length">
  <xsd:simpleContent>
    <xsd:extension base="xsd:float">
      <xsd:attribute name="units" type="lengthUnits" use="required"/>
    </xsd:extension>
  </xsd:simpleContent>
</xsd:complexType>
<xsd:simpleType name="lengthUnits">
  <xsd:restriction base="xsd:string">
    <xsd:enumeration value="feet"/>
    <xsd:enumeration value="metres"/>
    etc.
  </xsd:restriction>
</xsd:simpleType>

Data source schema designers can therefore declare elements that use the predefined dimension types. For example, an element MyLength may be defined as:

<xsd:element name="MyLength" type="length"/>
and then used in a document as follows:

<MyLength units="metres">100</MyLength>

Alternatively, if the units were to be fixed for all instances, the element declaration may be used to refine the base type by restricting the value of the units attribute to be "metres".
If data sources use this method of defining dimensions and units, the data browsing application 120 checks when mappings are being created that each of the source data components have similar dimensions. A mapping is not permitted if the sources of the mapping are not dimensionally consistent. The user can indicate a mathematical operation between sources that have the same dimension but different units by inserting the required operator between the operand sources. The data browsing application 120 then uses the dimension library to perform the necessary conversion, with the target of the mapping having the same units as those of the first operand of the operation.
For example, consider the case where a user selected first a source data component DistanceTravelled (which extended the length type and used units of kilometres) and then a second source data component DistanceFromSource (which also extended the length type but used units of miles). If the user then edited an example of the data by inserting a '+' operator between the representative values, the resulting representative data values would show the sum of the distances in kilometres. If the user wished that the resulting values were represented using the units of miles, then the user can alter the order in which the source data components are selected. Alternatively, the type information of the target data component of the mapping can be explicitly edited to use the units of miles.
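The convert-to-the-first-operand's-units behaviour described above can be sketched as follows. The conversion table and function names are illustrative assumptions standing in for the dimension library.

```python
# Illustrative length-dimension table: each unit expressed in metres.
LENGTH_UNITS_IN_METRES = {
    "metres": 1.0,
    "kilometres": 1000.0,
    "miles": 1609.344,
    "feet": 0.3048,
}

def add_lengths(value_a, units_a, value_b, units_b):
    """Return value_a + value_b expressed in the units of the first operand."""
    in_metres = (value_a * LENGTH_UNITS_IN_METRES[units_a]
                 + value_b * LENGTH_UNITS_IN_METRES[units_b])
    return in_metres / LENGTH_UNITS_IN_METRES[units_a]

# DistanceTravelled (kilometres) + DistanceFromSource (miles) -> kilometres
print(round(add_lengths(100, "kilometres", 10, "miles"), 3))  # 116.093
```

Reversing the operand order would give the same total expressed in miles, which is why altering the selection order changes the units of the result.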
The data browsing application 120 can only perform these unit conversions when the data sources are appropriately described. It should be clear that currency conversions can be performed using substantially the same method with currency being defined as a dimension and the units being the individual currencies. In the preferred arrangement, the data browsing application 120 requests a conversion rate from an on-line conversion process. In the situation where this conversion process is not available due to network problems, then the conversion is performed using a table of conversion rates.
6. Presenting Data Views

Data views can be created either using schema views as described in Section 2, or using visual methods as described in Section 6 of this disclosure. These data views are presented to the user by the data browsing application 120.
The preferred method of presenting data views in the data browsing application 120 is now described. In this method, the user works in a GUI environment 1200 displayed by the data browsing application 120 upon the video display 1114 as depicted in Fig. 12A. On the left of the GUI 1200 is a datamarks panel 1205. Datamarks are similar to the web browser bookmark concept, in that a datamark represents a link to useful information. Preferably the datamarks panel 1205 is a tree containing items, with each terminal item being associated with a URI. In alternative arrangements, the datamarks panel 1205 may be implemented as a simple list.
693292 O -49o The URI may correspond to a data source or a previously created data view. In the preferred arrangement, data sources comprise XML documents and data servers. Data servers are described in more detail in Section 1. In alternative arrangements, other types of data sources may be permitted Microsoft Excel® spreadsheets). In these 00oO alternative arrangements the data browsing application 120 provides a method for Ngenerating XML from the data source.
In the datamarks panel 1205, shown in Fig. 12A, the data sources and data views are maintained in separate sections as two nodes of a tree). This is not essential and is done in the preferred arrangement to help the user differentiate between primary and derived data sources. Also at the top of the datamarks panel 1205 are selectable panel options 1204 for adding and organising datamarks in the panel. If the user selects to add a datamark, a new datamark is created and added to the datamarks panel 1205 for the currently selected data view. The name of the added datamark is assumed to be the title of the data view and the URI is assumed to be the URI of the data source or data view definition. In the preferred arrangement, the datamark is added to the appropriate section (data source or data view node) of the datamarks panel 1205.
Selecting an item from the datamarks panel 1205 results in the generation of XML data. In the case of the data source, preferably the URI contains an XQuery which is used by the data server to generate an XML data result. If the XQuery is not specified for a URI, then the default request of is assumed. In the case of a data view, the data browsing application 120 reads in the data view definition (which is generated as described later in Section 11), generates the appropriate query request(s), collates and formats the XML data from the query request(s) according to the data view's definition.
In each case, before the XML data is presented to the user, a check is made to see 693292 O o whether any mappings are relevant to the incoming data. If relevant mappings are found the corresponding data transformations are performed. These transformations can involve combining one or more data components from one or more data sources. The preferred process of applying mappings when a user selects to see a data view is described in 00 Section 11.
(Ni To the right of the datamarks panel 1205 in GUI 1200 is a workspace 1202. When a user selects a datamark, the resulting data view is displayed in the workspace 1202.
Preferably, the workspace 1202 is organised as a grid with each data view that has been selected for viewing being displayed as a rectangular grid unit. The preferred size of displayed data views (and hence the number of grid units displayed in each row of the grid in the workspace) is specified as a user preference. The user can select to re-size and move data views in the workspace. If this occurs the grid layout is relaxed to a manual layout, however the preferred data view size is still used when displaying new data views in the workspace and the grid layout is used to determine an initial location of a new data view.
The GUI 1200 also allows the user to modify the data views displayed in the workspace 1202. For example, the user can modify presentation properties fonts, styles, colours, etc.), apply filters, change the sort order, specify and apply transformations that may apply to one or more data components, etc. At any time the user can save a selected data view. If the data view originated from a datamark corresponding to a data source and the user had made modifications, a new data view definition is created. The user can then specify where the created data view definition is to be stored.
If the data view originated from an existing data view definition, then the user can select to either update the existing data view definition or create a new data view definition for 693292 -N51o the selected data view.
SIn an alternative arrangement, the collection of data views occupying the worksheet 1202, can also be saved. In this case when a user saves his/her work, the user can select to save the entire workspace 1202, including the new data view. This 00 5 workspace 1202 can be exchanged with other users. In a further variation, data views are
C,
always saved as part of a workspace 1202. This workspace 1202 can contain any number Sof data views and other workspaces. It is also possible for contained components of a workspace to be laid out according to a layout type for the workspace other than the grid layout type previously described in columns or row). As described above, a workspace can also be laid out manually. A workspace can act like a package that can be exchanged with other users. On receipt of a workspace, for example via e-mail or by a URI link, a receiving user can choose to unpack the workspace by dragging contained workspaces to the datamarks panel 1205 substantially as shown in Fig. 12.
Alternative arrangements can also allow more than one workspace to be open at once. Open workspaces not currently being viewed may be accessed via a set of tabs located above the status bar 1290 of Fig. 12A.
When a user selects a datamark from the datamarks panel 1205, the data view associated with that datamark is presented in the workspace in the next available grid position. The presentation process is described in more detail later in this section. If the workspace is clear, then the data view will appear at the top left hand corner of the workspace. Alternatively the user can select the clear workspace control 1234 on a toolbar 1207 in Fig. 12A before selecting the datamark. If existing data views are displayed in the workspace 1202 and the clear workspace control 1234 is selected, then the user is prompted to save those data views if they have been modified from their 693292 O -52- Soriginal state.
d SThe user can also present a data view by typing the URI of a data source or data view definition in the open location control 1208 below the toolbar 1207 in Fig. 12, selecting the desired location from a history list, which is viewed by pressing an 00 icon 1209, or by using the "Open Location" function on the File menu. In each case, the Sresult is the same procedure as described for selecting a datamark.
SThe preferred method of displaying a data view associated with a datamark is now described with reference to the process flow in Fig. 31A. The user selects the appropriate datamark in the datamarks panel 1205 and a GUI object 3110 of the data browsing application 120 passes the URI associated with this datamark to the workspace controller object 3115. This object ascertains whether the received URI corresponds to a data source or a data view definition. In the latter case, the object 3115 locates and parses the data view definition, which, in the preferred arrangement is stored as an XML document, into a tree structure comprising the data view's definition. Preferably the data view's definition is represented using a Document Object Module (DOM) object. Data view definitions are described further in Section 11.
The workspace controller 3115 then creates a data view presenter object 3120 to present the data view. The workspace controller 3115 passes to the object 3120 either the URI of a data source or a node of data view's definition. In the preferred arrangement, the root node of the data view's definition is passed to the data view presenter object 3120. However, in other arrangements, the data view definition may be organised differently with descriptive nodes when and by whom the data view was created) not being passed to the data view presenter object 3120. Preferably, each displayed data view in the workspace 1202 is associated with its own data view presenter object 3120.
693292 O -53- O The data view presenter object 3120 then creates a new data view manager object 3125 to obtain the data for the data view to be presented. The URI (of a data source) or the data view definition node is passed to the created data view manager object 3125. If the data view manager is initialised with a URI, then the data view 00 manager 3125 requests the XML store object 3140 to fetch the URI, parse the resulting
C,
N stream corresponding to the XML document into a DOM-like structure, hereinafter called an XML Schema DOM, or simply XSDOM, and return a handle to the created structure.
The XSDOM structure differs from the DOM structure in that element and attribute nodes provide additional methods beyond those of the DOM Level 2 Application Programming Interface (API). The additional methods locate XML Schema definitions for the abovementioned nodes. These XML Schema definitions are used by a data view presenter object 3120 to appropriately present the data associated with the data view. The data view manager object 3125 uses the provided handle to the XSDOM object created by the XML Store object 3140 to create its own XSDOM structure which acts as the data for the data view being presented. This XSDOM structure is the structure 3130 in Fig. 31A.
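By way of illustration only, the idea of an XSDOM-style node (a DOM node augmented with a schema-lookup method) can be sketched as below. The class name `XSDOMElement`, the `schema_index` dictionary and the `schema_definition` method are all hypothetical names; the disclosure does not specify the API, and a plain dict stands in for real schema-location resolution.

```python
import xml.etree.ElementTree as ET

class XSDOMElement:
    """Hypothetical sketch of an XSDOM-style node: an ordinary DOM-like
    element plus an extra method that locates its XML Schema definition."""

    def __init__(self, element, schema_index):
        self.element = element             # the wrapped DOM element
        self._schema_index = schema_index  # tag name -> schema definition

    def schema_definition(self):
        # The additional method beyond the DOM Level 2 API: look up the
        # schema definition for this element's name.
        return self._schema_index.get(self.element.tag)

node = XSDOMElement(ET.fromstring('<Sales month="Jan">123000</Sales>'),
                    {"Sales": {"type": "xs:integer"}})
```

A presenter could then call `node.schema_definition()` when deciding how to render the value, rather than inspecting the raw data alone.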
Preferably the data browsing application 120 has a single XML Store object 3140.
This object caches fetched XML documents in an object cache. Hence before the XML Store object 3140 initiates an HTTP fetch of XML data, it first checks whether the document is in the cache and whether the cached copy is still up-to-date. The XML Store object 3140 also receives requests for XML Schema documents. These requests arise from XSDOM requests for definitions of elements and attributes. The XSDOM element and attribute nodes can identify their XML namespace and attempt to locate XML Schema documents that have definitions for that namespace and thus perhaps for the particular element or attribute. The XML Store object 3140 locates XML Schema documents using the schema location URIs included in XML documents. It parses these schemas into schema objects and caches them for future use in the XML Store cache 130, corresponding to the database 130 of Fig. 1. As with XML documents, before schema objects are used from the cache the XML Store object 3140 checks that they are still up-to-date.
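The fetch-with-cache behaviour described above can be sketched as follows. The class name `XMLStoreCache`, the `loader` callable (standing in for the HTTP fetch) and the `max_age` freshness window are all assumptions; the disclosure only says that the store checks whether a cached copy is still up-to-date, not how.

```python
import time

class XMLStoreCache:
    """Minimal sketch of the XML Store's caching fetch, assuming a
    time-based freshness check."""

    def __init__(self, loader, max_age=60.0):
        self._loader = loader    # stand-in for the HTTP fetch
        self._max_age = max_age  # assumed freshness window in seconds
        self._cache = {}         # uri -> (fetched_at, document)

    def fetch(self, uri):
        entry = self._cache.get(uri)
        if entry is not None:
            fetched_at, document = entry
            if time.time() - fetched_at < self._max_age:
                return document        # cached copy is still up-to-date
        document = self._loader(uri)   # cache miss or stale: fetch again
        self._cache[uri] = (time.time(), document)
        return document
```

With this sketch, a second request for the same URI inside the freshness window is served from the cache without invoking the loader again.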
If the data view manager object 3125 is initialised with a data view definition, it extracts the query associated with the definition and requests the XML Store object 3140 to fetch the data it requires in order to process the query. Each request results in a handle to an XSDOM object in the XML Store object 3140. The data view manager 3125 then uses these handles to obtain the necessary data to create its own XSDOM structure which corresponds to the data for the data view 3130. This data may represent mapped, filtered, sorted, grouped data from one or more data sources. If the data is obtained from more than one data source, then it represents a join across those data sources. Joins are described further in Sections 6 and 7 of this disclosure.

The presentation process performed by the data view presenter object 3120 requires an analysis of the data view's XSDOM data 3130, and the associated schema definitions, in order to select the most appropriate presentation or display type. Once the most appropriate display type (e.g. table, graph, scatter plot, 2D grid, etc.) is selected, the data view presenter object 3120 renders the data using the selected display type and passes a handle to this rendered data view to the workspace controller 3115 for presentation to the user.
Preferably the rendered data view is a Scalable Vector Graphics (SVG) object.
The data browsing application 120 has a set of SVG templates for each display type, with each display type being associated with a default or preferred SVG template. On selection of the preferred display type, the data view presenter object 3120 selects the default SVG template for the preferred display type, and populates it with data from the data view's XSDOM structure 3130. The result of this population is a renderable SVG object, which can be displayed to the user and with which the user can interact.
The presentation analysis performed by the data view presenter object 3120 is described in more detail in the remainder of this section. It should be clear that the described method can be generally applied to hierarchical data and hence is not limited to use in the data browsing application 120.
The method for selecting the most appropriate presentations operates in three phases, as depicted in the flowchart of Fig. 31B. The first, analysis phase 3160, examines the structure of the hierarchical data, from the data itself or from schema definitions of the data if such are available, or from both, to identify the existence of regularly occurring data items and determine whether a representative base table data structure and flat data table can be constructed. The presence of the latter indicates that the data is 1- or 2-dimensional and hence a graph or xy plot presentation may be appropriate. The second, elimination phase 3170, is responsible for examining the data and/or its schema definitions to determine which display types are not appropriate. The elimination phase 3170 makes use of a set of elimination rules, each having an associated condition and a list of elimination candidates. When the condition of a rule is satisfied, its list of candidates is eliminated from the list of possible display types.
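The elimination phase can be sketched as a loop over (condition, candidates) rule pairs, as below. The concrete rules, property names and display-type strings are invented for illustration; the disclosure does not enumerate its actual rule set at this point.

```python
def run_elimination_phase(candidates, props, rules):
    """Apply elimination rules: each rule is a (condition, eliminated)
    pair, and when the condition holds for the data's properties the
    rule's candidates are dropped from the list."""
    remaining = list(candidates)
    for condition, eliminated in rules:
        if condition(props):
            remaining = [d for d in remaining if d not in eliminated]
    return remaining

GRAPHICAL = ["bar graph", "line graph", "pie graph", "xy plot"]
rules = [
    # No flat data table means the data is not 1- or 2-dimensional.
    (lambda p: not p["has_flat_table"], GRAPHICAL),
    # Non-numeric cell values rule out the numeric display types.
    (lambda p: not p["numeric_values"], GRAPHICAL),
]
```

Running the rules over text-only tabular data would leave, for example, only the tree and table display types in the candidate list.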
When all elimination rules are processed, more than one display type may remain. If this is the case then a third, preference phase 3180, begins, in which a set of preference rules is processed to order the remaining candidates. These rules test for attributes such as the number of rows or columns in a table, or the number of segments in a pie chart. For example, if there are more rows than columns then it may be more appropriate to swap the rows and columns so that the height of the table is less than its width. This rule preferably takes into account the size of the data view and hence the number of table columns that are realistically viewable.

Once the list of possible display types has been ordered, the data is laid out according to the best display type and presented to the user. A menu listing all the display types in the list, in the order previously determined, is also presented to the user to give the user the option of selecting another appropriate display type.
5.1 Display Types

The data browsing application 120 supports the following display types: tree, table, bar graph, line graph, pie graph or collection thereof, xy scatter plot (or simply xy plot), and 2D grid. Fewer or additional display types may be used. Display types comprising collections of one or more of the above display types may also be used. With the exception of the 2D grid and tree display types, there are sub-types for each type. For some tables, the user may have the option to view the transpose of the table (i.e. rows and columns are transposed). For bar, line, pie graphs and xy plots, there are row-wise and column-wise sub-types. The tree display type is a generic display type that can be used to display data of any hierarchical structure. It simply shows the data in its natural hierarchical form.
The base table data structure underlying all non-tree display types is a tabular display format where instances of individual data components are laid out in columns.
An example of XML data and its base table representation are shown in Figs. 32 and 33 respectively. In Fig. 33, the column headings 3301-3304 identify the data components present in the XML data, and are typically the names of elements and attributes in the data. Only those attributes that are considered to hold primary data are treated as data components. Therefore, attributes belonging to the XML instance or XML Schema instance namespaces (e.g. xml:lang, xsi:schemaLocation, etc.) are not treated as data components because the role of these attributes is to provide information to the processing application. Similarly any attributes belonging to the namespace of the data browsing application 120 are also not treated as data components. Such attributes may have been used to store an alternative name for a data component or to temporarily list a data component. Such attributes are described in Section 7 of this disclosure.
Shown in each column are the values 3305-3311 associated with these elements and attributes. The contents of the columns are ordered such that the values of attributes and sub-elements of the same XML element are shown on the same row in the table.
Thus, "Jan" (item 3307) and "123000" (item 3308) appear on the same row since they originated from the month attribute and the value of the same Sales element in the XML data. Both of these in turn are associated with Apparel (item 3306) since they are sub-elements and descendants of the same (first) Dept XML element. Note that this Dept element also comprises three other Sales sub-elements, and hence Apparel is repeated three more times in items 3309, 3310, 3311 in the Dept column. However, for reasons of clarity, when the same value appears consecutively in the same column, only the first occurrence is usually shown, and the remaining cells are usually left blank.
A base table data structure can be fully expanded as shown in Fig. 33, where the values of all attributes and sub-elements are listed. Alternatively, the base table data structure may be displayed in non-fully expanded form, where the contents of certain XML sub-elements are not shown. Instead, the sub-elements are represented by hyperlinks in the table. An example of a non-fully expanded base table data structure is shown in Fig. 34, in which 3401 and 3402 are hyperlinks. Hyperlinks are typically used to reduce the depth or dimension of the XML data (and hence the size of the displayed table) to a manageable level.
Hyperlinks can thus be used to enable browsing of a data source in the data browsing application 120. By selecting a hyperlink in the presented data, the user can select a further context node for presentation and thus browse to further content in the data source. Alternatively, the user can select to view the content of the hyperlinks within the current display type. For example, the user can select to view a graph within the cell of a table. Selective viewing of the contents of hyperlinks within an existing data view results in a composite display type.
The base table data structure can be used directly for a table display type. For example, data having a repeated pattern of the same sub-elements (or attributes) is best presented to the user as a table with each element or attribute constituting a column of the table. However, with some data patterns, such as that represented by the base table data structure in Fig. 33, the hierarchical data can be flattened by promoting some data to be column or row headings. Whilst the base table data structure is useful for conveying the underlying structure of the XML data, and allows for easy manipulation of the data as described in Section 6, a flatter table structure is usually a more effective presentation format.
A flatter table data structure can also be more suitable for identifying the bar, line and pie graphical display types, since these display types are essentially methods for presenting the relationship between two data components that have a one-to-one correspondence between one another. When such data components exist, the content of one data component is displayed as column headings in the table, and the contents of the remaining data components are shown under their corresponding columns. If the contents of the data components exist in more than one subset, then each subset is displayed as one row of data in the table. The presence of subsets is indicated by the existence of a third data component, which has a one-to-many correspondence with the first two data components. The contents of this third data component can then be used to identify the different data subsets, and are typically shown in a column of row-headings in the displayed table. If there also exists another data component with a single value, then it may be appropriate to use its content as a caption for the table.
An example of a table display 3501 obtained by flattening the base table data structure of the XML data of Fig. 32 is shown in Fig. 35. In Fig. 35, the column headings 3502 are values of the data component 3303 Month, whilst the data cells are values of the data component 3304 Sales. The table also comprises a column of row headings 3503, which are the values of a third data component 3302 Dept. The names of the data components 3303 and 3302 that make up the column and row headings, Month and Dept respectively, are shown in the top-left corner cell 3505 of the table. Finally, data component 3301 comprising a single data value is displayed as the table caption 3506.
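The flattening of Fig. 33 into the table of Fig. 35 amounts to a pivot, which can be sketched as follows. The function name, argument names and the sample rows (modelled loosely on the sales data of Fig. 32) are illustrative only.

```python
def flatten_table(rows, label, value, series=None):
    """Pivot fully expanded base-table rows into a flat data table:
    `label` values become the column headings, `value` values the data
    cells, and `series` values (if given) the row headings."""
    columns = []
    table = {}
    for row in rows:
        heading = row[label]
        if heading not in columns:
            columns.append(heading)
        series_key = row[series] if series else ""
        table.setdefault(series_key, {})[heading] = row[value]
    return columns, table

# Rows modelled loosely on the sales data of Fig. 32.
rows = [
    {"Dept": "Apparel", "Month": "Jan", "Sales": 123000},
    {"Dept": "Apparel", "Month": "Feb", "Sales": 98000},
    {"Dept": "Grocery", "Month": "Jan", "Sales": 45000},
    {"Dept": "Grocery", "Month": "Feb", "Sales": 51000},
]
columns, table = flatten_table(rows, label="Month", value="Sales", series="Dept")
```

Here `columns` holds the Month column headings and each key of `table` is a Dept row heading, mirroring the layout of Fig. 35.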
The transposed form of the table display of Fig. 35 is simply a table with its rows and columns swapped. An example of such a table, based on the XML data of Fig. 32, is shown in Fig. 36. When a table, such as Fig. 35 or Fig. 36, is presented to the user in the workspace 1202, the user can also select to see the base table form for the data (i.e. Fig. 33). In the preferred arrangement, the user can specify whether data components that have a single value, such as item 3301, are included in the base table data structure. If the user selects not to include these data components, then one of them, typically the first data component, is used to compose a caption for the displayed data.

When the values of the second data component displayed in a flattened table display type are numerical, then it may be possible to present the XML data as a bar, line, or pie graph. If this is the case then there exists a direct mapping between the contents of a table display and those of the graph displays. For the cases of the row-wise bar and line display types, an example of the latter of which is shown in Fig. 37, each row of the table is shown as a data series 3701 in the graph. The row header associated with each row constitutes the name of the data series 3702, and the column headers become labels along the x-axis 3703. For the column-wise bar and line graphs on the other hand, an example of the former of which is shown in Fig. 38, each column of the data table 3501 is mapped onto a data series 3801, with the column header mapping onto the data series' name 3802, and the row headers mapped onto the x-axis labels 3803. Both examples shown in Fig. 37 and Fig. 38 are based on the flat data table used by Fig. 35.

Bar and line graphs can preferably have up to two different y axes (not shown in Fig. 37 and Fig. 38), one located on the left edge of the graph area, and one on the right.
Different y axes are typically used for plotting different sets of data, for example temperature and rainfall variations, where each set is represented by a distinct data component. The values of one data component are plotted against the left y-axis, and the values of the second are plotted against the right y-axis. The preferred arrangement maintains an axis index for each data series in the flat data table.
Although the preferred arrangement only permits graphs with a single x-axis, multiple x axes could be allowed in alternative arrangements where the base table data structure is used to find graph groups which are located in nested hierarchies.
Alternatively, multiple x-axis arrangements could use multiple flat data tables, one for each x-axis.
Similar mappings used for bar and line graphs are also used for the row-wise pie and column-wise pie graph display types. In the former, an example of which is shown in Fig. 39, a row in the table is shown as a pie chart 3901. If more than one row exists then multiple pie charts are displayed. The column headers of the table are mapped onto the labels 3902 associated with the pie segments in each chart. The row headers map onto the titles 3903 of the pie charts. In the latter column-wise display type, each column rather than each row is shown as a separate pie chart. If more than one column exists then multiple pie charts are displayed. Labels for the pie segments in each chart are obtained from the row headers of the table, whilst the charts' titles are obtained from the column headers. The example shown in Fig. 39 is based on the flat data table used for Fig. 35.

The xy plot display type is another display format used for presenting numerical data. As in the cases of bar, line and pie graphs, the use of xy plots requires the presence of two data components that have a one-to-one correspondence between one another.
One data component, referred to as the x-component, serves as coordinate values for the x-axis, whilst the other serves as coordinate values for the y-axis and is called the y-component. Again, as in the cases of the table, bar, line and pie graph display types, if there exists a third data component, called the series label component, which has a one-to-many correspondence to the x and y components, then the x and y components are said to be divisible into multiple subsets, in which case each subset is displayed as a separate data series in the plot. Unlike the preceding display types however, the xy plot display may incorporate an additional data component if it has a one-to-one or one-to-many correspondence with the x and y components. This additional data component, if it exists, serves as labels for each data point in the plot and is called the point label component.

The presence of the point label component, if it has a one-to-many correspondence with the x and y components, enables the creation of a column-wise xy plot display type, as opposed to the above which is also referred to as a row-wise xy plot. The column-wise xy plot is produced in the same way as its row-wise counterpart, with the exception that the roles of the series label and the point label components are swapped.
An example of the row-wise xy plot and its corresponding base table data structure are shown in Fig. 40 and Fig. 41 respectively. In the figures, data components 4001-4004 serve as the series label, point label, x, and y components respectively.
The 2D grid display type is a display format primarily used for data with pictorial content, but may also be used to display text-only data. It is typically generated from the base table data structure, in which the contents of each entire row of the table are presented as a single data item. The set of items is then laid out in a regular 2D grid pattern whose numbers of rows and columns are dictated either by the user or by the dimensions of the workspace. Each item in the grid comprises a list of property and value pairs. The properties are the column headings of the base table display, whilst the values are the data contents under the corresponding columns.
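Forming the grid items from base-table rows is a simple re-keying of each row by the column headings, sketched below with invented names and sample data.

```python
def grid_items(columns, rows):
    """Turn each base-table row into a 2D-grid item: an ordered list of
    (property, value) pairs keyed by the column headings."""
    return [[(column, row[column]) for column in columns] for row in rows]

# A hypothetical single-row staff table, loosely in the spirit of Fig. 43.
items = grid_items(
    ["Photo", "Name", "Ext"],
    [{"Photo": "img/ann.jpg", "Name": "Ann", "Ext": "1234"}],
)
```

Laying the resulting items out row by row to fit the workspace width then yields the grid itself.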
An example of a 2D grid display type and its corresponding base table data structure are shown in Fig. 42 and Fig. 43 respectively. Each cell in the grid contains a property named Photo 4201, which refers to the storage location of a photograph of an employee. These photographs 4201 are shown in the 2D grid display, alongside the remaining data components Name 4203 and Ext 4204.
The 2D grid display type is also used in the preferred arrangement to display a list of data items, where each data item represents a link to further information. Preferably, as mentioned earlier, the user can select to view the contents of these links in-line, resulting in a composite 2D grid display type.
In the preferred arrangement, the user can manipulate (e.g. copy to another data view, apply a filter, sort, transform or combine) data components. These data components may be data nodes, data sets or data series. A data node, such as a node of a tree, can be uniquely identified by an XPath expression which corresponds to the node's location in the document. A data set, on the other hand, such as a column of a table or a data series of a graph, can be identified by an XPath expression which corresponds to an iterator and an optional path relative to the iterator. So for example, in Fig. 36 the iterator for the data set corresponding to the Apparel column is:

    Company/Dept[name="Apparel"]/Sales

This data set does not require a further path to be specified in addition to the iterator. The optional path is preferably used for tables where all columns have the same iterator (i.e. all elements for a row have the same parent element).
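Using Python's ElementTree as a stand-in XPath engine, selecting a data set via such an iterator might look as below. The sample document and the attribute-based predicate (rather than the child-element test shown above) are assumptions made so the example runs against ElementTree's limited XPath subset.

```python
import xml.etree.ElementTree as ET

xml = """
<Company name="ACME">
  <Dept name="Apparel">
    <Sales month="Jan">123000</Sales>
    <Sales month="Feb">98000</Sales>
  </Dept>
  <Dept name="Grocery">
    <Sales month="Jan">45000</Sales>
  </Dept>
</Company>
"""
root = ET.fromstring(xml)

# The iterator locates the repeating Sales elements of one Dept; an
# optional relative path would then select within each matched node.
apparel_sales = [s.text for s in root.findall("Dept[@name='Apparel']/Sales")]
```

Evaluating the iterator yields the members of the data set directly, which is what allows a data view manager to re-use the expression when modifying or composing queries.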
In addition to the ordered list of display types produced by the preferred arrangement, iterators (and optional relative path) are provided for all data sets. These allow the data elements to be readily and specifically obtained from XSDOM documents 3130 created by the data view manager object 3125 in Fig. 31A for particular queries. The data view manager object 3125 also uses this information to modify queries of existing data views and construct new queries. This process is described further in Section 6.
The tree display type does not require the specification of data sets. The path for a data node of a tree display type can be taken directly from the data (i.e. it is already provided by the XSDOM API).
5.2 Analysis Phase

The process of selecting and ranking display types begins with the analysis phase 3160 of Fig. 31B. In the preferred arrangement, data is expressed in standard XML format. Other data formats are also possible.
The analysis phase 3160 of the preferred arrangement is responsible for analyzing the contents of an XML tree, identifying and extracting the relevant items from the tree and appropriately constructing from these a base table data structure. A base table data structure provides a means for detecting regularly occurring data items in the XML tree and identifying relationships between data items. Each column of the base table data structure represents a distinct attribute or element in the XML tree. The values listed under each column are instances of these attributes and elements that exist in the XML tree. In other words, each column is a data component. Further, the data for each row pertains to a single entity.
The placement of data in the base table data structure takes advantage of the implied correspondence between items residing on the same rows to capture the structural relationships between data elements in the XML tree. Preferably, the XML tree is traversed in a depth-first fashion during which the base table structure is populated from left to right. That is, when a sub-element is encountered, its attributes and contents are placed in the table immediately to the right of the attributes and contents of its immediate parent. If a parent element contains multiple instances of the same child-element, then these instances are placed underneath one another in the same column, to depict that there is a one-to-many relationship between the parent element and the child element.
Different types of child-elements sharing the same parent element occupy adjacent columns in the table.
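The depth-first population described above can be sketched as below. The column-naming scheme and sample document are invented, and the hyperlinking of deep or sibling repeating groups described elsewhere in this disclosure is omitted for brevity.

```python
import xml.etree.ElementTree as ET

def base_table_rows(element, inherited=None):
    """Build fully expanded base-table rows by depth-first traversal:
    attributes and text are placed to the right of the parent's values,
    and repeated child elements repeat the parent's values on
    successive rows."""
    row = dict(inherited or {})
    for attr, value in element.attrib.items():
        row[f"{element.tag}.{attr}"] = value   # invented column naming
    children = list(element)
    if not children:
        text = (element.text or "").strip()
        if text:
            row[element.tag] = text            # leaf value becomes a cell
        return [row]
    rows = []
    for child in children:
        rows.extend(base_table_rows(child, row))
    return rows

xml = """
<Company name="ACME">
  <Dept name="Apparel">
    <Sales month="Jan">123000</Sales>
    <Sales month="Feb">98000</Sales>
  </Dept>
  <Dept name="Grocery">
    <Sales month="Jan">45000</Sales>
  </Dept>
</Company>
"""
rows = base_table_rows(ET.fromstring(xml))
```

Each leaf produces one row carrying its ancestors' values, which is exactly the repetition that the displayed table later blanks out for consecutive duplicates.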
A base table data structure can generally be constructed from an XML tree of any depth or dimension. However, to ensure manageable sizes, the structure is typically limited to dimensions of 2 or 3 or less. The dimension of a base table data structure is determined by the number of cascading one-to-many relationships between data components. For example, the table of Fig. 33 has a dimension of 2 since there is a one-to-many relationship between data component 3301 (Company) and data component 3302 (Dept), the latter of which in turn has a one-to-many relationship to data component 3303 (Month). If an XML tree of a higher dimension is encountered then typically sub-elements residing on depth levels higher than 2 or 3 are not expanded during the tree traversal, and are instead represented by hyperlinks in the base table data structure. As mentioned previously, preferably the user can select to view hyperlinked data in a composite data view.
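As a rough sketch of the dimension rule just stated, cascading one-to-many levels can be counted recursively; the function below is an illustration under the simplifying assumption that a repeated child tag at a level contributes exactly one dimension.

```python
from collections import Counter
import xml.etree.ElementTree as ET

def dimension(element):
    """Count cascading one-to-many levels below an element: a level
    contributes one dimension when some child element type occurs
    more than once."""
    counts = Counter(child.tag for child in element)
    level = 1 if any(n > 1 for n in counts.values()) else 0
    child_dims = [dimension(child) for child in element]
    return level + (max(child_dims) if child_dims else 0)

xml = """
<Company>
  <Dept><Sales>1</Sales><Sales>2</Sales></Dept>
  <Dept><Sales>3</Sales></Dept>
</Company>
"""
root = ET.fromstring(xml)
```

Under this sketch the Company/Dept/Sales data has dimension 2, matching the Fig. 33 discussion, and a traversal could stop expanding once the count exceeds the 2-or-3 limit.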
Hyperlinks may also be used when a parent element comprises different types of sub-elements, more than one of which contain multiple instances of data. In this case the sub-elements are preferably represented by hyperlinks to prevent correspondence relationships across instances of the different sub-elements from being misconstrued.
Consider the XML tree in Fig. 44 and its fully expanded base table representation in Fig. 45. In the figures, the data element 4401 Dept comprises two different types of sub-elements, 4402 Sales and 4403 Staff, each of which occurs more than once. Due to the implied correspondence between data residing on the same rows in the table, the fully expanded table of Fig. 45 undesirably suggests that Gender and Staff values 4501 and 4502 are somehow related to Sales and Month values 4503 and 4504. In order to avoid this implication, the sub-elements 4402 Sales and 4403 Staff are preferably represented by hyperlinks, resulting in the base table data structure of Fig. 46 where each row pertains to a single entity.

Once constructed, a base table data structure is analyzed to determine whether other display types are possible. Since all of the remaining display types are essentially different methods for displaying 1- or 2-dimensional data, the data contained in the base table data structure must be of the same number of dimensions, otherwise the remaining display types are not possible. To assist the generation of these display types, the data in the base table data structure needs to be reorganised into a more suitable format, called a flat data table. A flat data table is a data structure in which the hierarchies of a base table data structure have been collapsed and represented as one-to-one relationships. This is possible when the base table structure has few data components, comprising primarily n data components that have one-to-one relationships with one another, and is achieved by promoting one of these data components to be column headings, and populating the cells of the table with the values of the remaining data components. The flat data table is the required data structure for graphical data. Typically, n = 2 and hence each cell in the resulting table contains a single value. When n = 3 or more, each cell contains multiple values and the table is said to be an extended flat data table. Extended flat data tables with n = 3 are typically used for the xy plot display type, and for bar and line graphs with two distinct y axes.
The process of constructing a flat data table begins by identifying n multiply-occurring data components in the base table data structure that have a one-to-one correspondence with one another, where n = 2 or 3. One, referred to as the label component, serves as the column headings of the data table, whilst the others, referred to as the value components, serve as the contents of the data cells in the table. Whilst there are no restrictions on the value components, the label component should preferably not contain duplicated data since it is used, for example, to label the x-axis in a bar or line graph, where duplicated labels are generally not allowed. A second condition on the label component is that it should preferably contain text data. The rationale for this is that, should not all n data components contain numerical data, using the text component as labels frees the other possibly numerical data components for the contents of the data table, thereby allowing graphical display types to be generated.
A different set of conditions is applied if there exists another multiply-occurring data component that has a one-to-many correspondence to the first n data components.
The presence of this additional data component, referred to as the series label component, indicates that the first n data components comprise distinct subsets. The label component should then preferably comprise distinct data values within each subset, whilst the sets of data across individual subsets must preferably be identical or substantially identical. If the above conditions are satisfied, then each subset of the value components makes up a single row of cells in the flat data table, and the series label component makes up the row heading column of the flat data table.
If another singly-occurring data component is present in the base table data structure, then it may act as a caption for the flat data table. On the other hand, if another multiply-occurring data component exists, then it generally cannot be accommodated in the flat data table. This is because the flat data table is already fully populated with all the data needed to generate its associated display types, with no further slots remaining.
Since the aim of the presentation process is to select displays that are most appropriate for showing all or substantially all of the data that is present, the preferred option is to revert to either a table display type (using the base table data structure) or a tree display type rather than showing only part of the data.
A flow chart of the procedure for constructing a flat data table is shown in Fig. 47A, with item 4715 in that figure being shown in detail in Fig. 47B. Fig. 47A depicts a method 4700 which is preferably performed as part of the data browsing application 120 and in which an initial step 4705 operates to identify n multiply-occurring data components di each having a one-to-one correspondence with one another.
Step 4710 then checks how many such data components exist and, if zero, one, or more than three, step 4735 follows and construction of a flat data table is not possible. If the number of data components is 2 or 3, then step 4715 follows where one of the data components di is selected as a label component. Also, a series label component s, if such exists, is identified. In step 4720, which follows, the remaining data components di are assigned as value components. Step 4725 then tests if there exists a multiply-occurring data component other than di or s. Such a data component must not have a one-to-one correspondence with data components di, otherwise it would have been identified in step 4710 among these di. If so, then step 4735 operates to halt construction of a flat data table. If not, step 4730 follows to test if the number of data components is 2. If so, step 4740 follows and a non-extended flat data table is constructed. If not, step 4730 passes control to step 4745 where an extended flat data table is constructed.
Fig. 47B shows the detail of step 4715, which has an entry point 4750. Step 4752 performs a check of whether there is a multiply-occurring data component s with a one-to-many correspondence with di. The procedure then effectively divides into two branches, one including steps 4754 to 4768, and the other including steps 4772 to 4778.
Step 4754 divides each di into subsets, each corresponding to a single value of s, and s is made the series label component. The remaining steps in this branch each perform a test for which a positive response (i.e. yes) transfers control to step 4770, whereas a negative response (i.e. no) transfers control to the next test in the branch and, ultimately, step 4780. Step 4756 tests if there is a data component di with text values that are unique within each subset and which are substantially identical across all subsets.
Step 4758 tests if there is a data component di with values that are unique within each subset and are substantially identical across all subsets. Step 4760 tests if there is a data component di with values that are unique within each subset. Step 4756 is not redundant since step 4756 tests for 3 conditions that must be simultaneously true within the same data component di, whereas in step 4758 only 2 of these conditions need to be true, and in step 4760 only 1 of these conditions needs to be true. Effectively, this approach first looks for a data component di satisfying all 3 conditions. If none exists then such is tried again but only testing for 2 of the 3 conditions, and so on. Step 4762 tests if there is a data component di with values that are substantially identical across the subsets.
Step 4764 tests if there is a data component di with text values. Finally, step 4768 tests if there is a data component di with monotonically increasing or decreasing numerical values within each subset.
In the branch of Fig. 47B including steps 4772 to 4778, a further series of tests is performed, for each of which a positive (ie. yes) response transfers control to step 4770, and a negative (ie. no) response transfers control to the next step and, ultimately, step 4780. Step 4772 tests if there is a data component di with unique values and text values. Step 4774 tests if there is a data component di with unique values. Step 4776 tests if there is a data component di with text values. Finally, step 4778 tests if there is a data component di with monotonically increasing or decreasing numerical values.
If one of the above tests responds positively, step 4770 follows to select the leftmost such data component di satisfying the test as the label component. In contrast, step 4780, which occurs if all tests in each branch are negative, records that a label component does not exist. Step 4782 follows from steps 4780 and 4770 and returns program control to the source.
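The progressive relaxation of conditions in the second branch (steps 4772 to 4778) can be sketched as follows. The list-of-dictionaries representation of data components and the helper names are illustrative assumptions:

```python
# Sketch of Fig. 47B steps 4772-4778: candidate tests are tried in order
# of decreasing strictness, and the leftmost data component satisfying
# the first successful test becomes the label component.
def is_unique(vals):
    return len(set(vals)) == len(vals)

def is_text(vals):
    return all(isinstance(v, str) for v in vals)

def is_monotonic(vals):
    return vals == sorted(vals) or vals == sorted(vals, reverse=True)

def pick_label(components):
    tests = [
        lambda v: is_unique(v) and is_text(v),   # step 4772
        is_unique,                               # step 4774
        is_text,                                 # step 4776
        is_monotonic,                            # step 4778
    ]
    for test in tests:
        for comp in components:                  # leftmost first (step 4770)
            if test(comp["values"]):
                return comp["name"]
    return None                                  # step 4780: no label component
```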
If the creation of the flat data table from Figs. 47A and 47B is unsuccessful, then all graphical and xy plot display types are excluded from the list of possible display candidates for the elimination phase 3170. Only the tree and table display types are included in the list, with the latter being based on the base table data structure. If the procedure of Fig. 47A results in an extended flat data table, then the bar, line, and xy plot display types are included together with the table and tree display types in the list of possible display candidates. The preferred table display will use the base table data structure, and the bar and line graph display types will have 2 distinct y-axes. For the xy plot display type, the two value components will play the roles of the x and y components, whilst the label component will assume the role of the point label component. If on the other hand the procedure of Fig. 47A results in a non-extended flat data table, then the pie graph display type is also included in the list, whilst the column-wise xy plot display type is excluded. The preferred table display will use the flat data table structure, and the bar and line graph display types will have only one y-axis. The xy plot display type will have no point label component, and the label component and the sole value component will play the roles of the x and y components in the scatter plot respectively.
The 2D grid display type places a different requirement on the format of the base table data structure. There is no restriction on the number of data components present.
However, all multiply-occurring data components must have a one-to-one correspondence relationship with one another. If the condition is satisfied, then the 2D grid display type is included in the list of possible display candidates for the elimination phase 3170, otherwise it is excluded. Clearly, this data pattern is also suitable for the table display type which is based on the base table data structure. The preference rules (see Section 5.4) operate to order these display types appropriately.
When the data being displayed is small and/or can be quickly accessed, all the data is preferably examined in the analysis phase 3160. However, in a typical application environment where data may be obtained from multiple different data sources and accessible over slow network connections, it is preferable that the ordering of display types proceed without waiting for all data elements to be available, so that a display can be generated and presented to the user without noticeable delays. Consequently, when it is not possible to examine all the data within a short duration, only a limited subset is analysed before the analysis phase 3160 terminates. In the preferred arrangement, if a predetermined percentage subset of the data has been examined within a predetermined time period, then the data components identified and denoted by columns in the partially constructed base table data structure are assumed to represent all the data components present in the XML tree. The relationships between data components detected in the partial base table data structure at this point, whether they be one-to-one, one-to-many, or many-to-one, are also assumed to hold true in the unseen data.
With the above assumptions, the analysis of the base table data structure and the subsequent construction of the flat data table are performed as described earlier. As more data becomes available, tests are performed to determine if the assumptions are violated and, if so, the display selection process terminates with a display list comprising a single tree display type. The assumptions are violated if, for example, new and significant data components are detected in the newly seen data, or if multiple instances of a sub-element representing one data component are detected within a parent element representing a second data component when it had been assumed that there is a one-to-one correspondence between the two data components. Data components are typically considered to be significant if they are multiply-occurring data, or if there are a substantial number of singly-occurring data. On the other hand, if a predetermined subset has not been examined within the predetermined time period, then the remaining data is assumed not to follow similar patterns and the process terminates immediately with a display list comprising a single tree display type. In an alternative arrangement, the last condition is omitted and the remaining data is assumed to follow similar patterns to the already examined data, regardless of the relative amount of data not yet examined.
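A time-budgeted partial analysis of this kind might be sketched as follows; the thresholds, names and return labels are assumptions for illustration:

```python
import time

# Sketch of the time-budgeted analysis described above: examine data
# elements until either the time budget lapses or the data is exhausted,
# then decide whether the patterns seen so far can be assumed to hold.
def analyse_partially(elements, total, budget_s=0.5, target_fraction=0.1):
    seen = 0
    deadline = time.monotonic() + budget_s
    for element in elements:
        if time.monotonic() > deadline:
            break
        seen += 1   # a real implementation would update the base table here
    if seen / total >= target_fraction:
        return "assume-patterns-hold"   # proceed with the partial base table
    return "tree-only"                  # fall back to the tree display type
```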
In addition to the actual data, schema information describing the structure and nature of the data contents is often available. When working with XML data as described in the preferred arrangement, schema information is preferably expressed in the form of XML Schemas. An XML Schema document contains definitions for each of a collection of elements. Each definition specifies the allowable attributes, sub-elements and the cardinality and order of the sub-elements.
Schemas are a useful source of information in the construction of the base table data structure and also the flat data table, since they often allow the presence of data components and their inter-relationships to be deduced without the need to examine actual data. They are especially useful when the data to be analysed is large and contains many repeated elements, since these repeated elements are described by a single schema element and hence a quick examination of the latter is usually sufficient to deduce their contents.
Occasionally, a schema may not contain sufficient information and an inspection of the actual data is necessary. For example, if the schema indicates that a certain data element or attribute is optional, then the actual data needs to be examined to determine whether the element or attribute is present. Also, a schema definition may allow an element to have any elements as part of its content. In such cases schemas are still useful because they can help to pinpoint which parts of the data need to be examined.
Apart from structural information, schemas may also contain information on the type of data stored in each data element. A schema can be used, for example, to determine whether each data element is numerical or not, and if so, obtain its associated unit (if any). For XML data, the data type associated with each attribute or text value of an element is specified in the schema definition of the element.
To facilitate their use in the elimination phase 3170, in addition to the actual data, the flat data table constructed in the analysis phase 3160 also stores schema information on data types for items whose actual data are not yet available. Where the schema definition for a data component is not available or its data type cannot be determined, a generic text string data type is assumed and stored in the table. This indicates that nothing is known about the item, and hence an examination of the actual data is needed to determine its data type.
A flowchart of the analysis phase 3160, incorporating both schema and data analysis, is given in Fig. 48. The method of Fig. 48 commences with a program entry point 4802 followed by step 4804 which determines if a schema is available. If so, step 4806 examines the schema to determine if it contains sufficient information to identify all data components that are present in the XML data. If not, step 4808 follows to examine the data where necessary. As mentioned earlier, if an element or attribute is declared as optional in the schema for example, then there is insufficient information to determine whether that element or attribute is actually present in the data, and hence an examination of its expected location in the data is necessary. Where step 4804 finds no schema available, step 4810 follows to examine a subset of data. The subset of data selected for examination is typically either determined randomly or on a first-come-first-served basis. Its size is governed by the amount of data that can be processed within some pre-determined time duration.
Each of steps 4810, 4806 and 4808 returns control to step 4812 where a base table data structure is constructed. Step 4814 follows to assess if a flat data table can be constructed from the base table. If so, step 4816 follows to construct a flat data table and to include bar, line and pie graphs and xy plots in the list of display candidates. If not, step 4818 follows which excludes bar, line and pie graphs and xy plots from the candidate list. Step 4820 follows each of steps 4816 and 4818. Step 4820 tests if all multiply-occurring data components have a one-to-one correspondence with one another. If so, step 4822 follows and a 2-dimensional grid is included in the candidate list. If not, step 4824 is performed where a 2-dimensional grid is excluded from the candidate list.
The method 3160 then ends at step 4826.
5.3 Elimination Phase

A key factor in determining whether a graphical presentation such as a graph or xy plot is possible is the type of data being displayed, in particular, whether they contain numerical values and, if so, their associated units of measurement, such as length, temperature or currency. Only numerical data with compatible units can be shown as graphs or plots. Others can only be shown as tables or trees. In the remainder of the present document, the term "numerical data" will be used to denote a data item comprising a numerical value, or a numerical value with an associated unit.
The elimination phase 3170 applies criteria such as these in order to eliminate non-appropriate display types. To achieve a modular design, the criteria are preferably expressed in the form of elimination rules in the present arrangement. Each elimination rule is independent of every other rule and hence can be modified, added or removed without affecting other rules. Because the elimination phase 3170 is concerned with the elimination of various graphical display types, the processing is based on the flat data table. This table is obtained using a set of pointers into the base table data structure (i.e. the data is not duplicated).
Each elimination rule has associated with it a set of display types that are eliminated from the list of all possible candidates once certain conditions or tests are satisfied. The evaluation of each rule can return one of three possible values: (i) the tests succeed, in which case display types can be eliminated; (ii) the tests fail, in which case the display types are not eliminated (they may still be eliminated due to other elimination rules); or (iii) there is insufficient information to determine the outcome of the tests, in which case the rule must be executed again when new data become available.
In the first two cases, the rule is said to have processed successfully and need not be processed again. An additional test is performed prior to the processing of a rule. A check is made to determine whether at least one of a set of display types associated with a rule is among the remaining candidates. If so then the rule is processed, otherwise it is irrelevant and is hence deleted.
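The three-valued rule evaluation and the relevance check described above can be sketched as follows; the dictionary-based rule representation is an assumption for illustration:

```python
from enum import Enum

# Sketch of three-valued elimination-rule processing. A rule's condition
# may succeed, fail, or be undecidable on the data seen so far; only
# undecided rules are kept for re-evaluation when more data arrive, and
# rules whose elimination candidates are already gone are dropped.
class Outcome(Enum):
    TRUE = 1       # condition holds: eliminate the rule's display types
    FALSE = 2      # condition fails: rule is done, nothing eliminated
    UNKNOWN = 3    # insufficient data: re-run the rule later

def process_rules(rules, candidates):
    pending = []
    for rule in rules:
        if not (rule["eliminates"] & candidates):
            continue                      # rule is irrelevant: delete it
        outcome = rule["condition"](candidates)
        if outcome is Outcome.TRUE:
            candidates -= rule["eliminates"]
        elif outcome is Outcome.UNKNOWN:
            pending.append(rule)          # keep for the next data arrival
    return candidates, pending
```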
The use of three possible return values from each elimination rule allows the data browsing application 120 to operate without the need for all data to be present. Each time a new data item or items become available, the set of elimination rules is processed,
00 5 To facilitate the evaluation of rules, the base table data structure (and thus its (Ni derived flat data table) is updated as more and more data become available. When Sprocessed, an elimination rule operates on the contents of the partial flat data table current at the time of firing. The preferred list of elimination rules is given in Table 3. Fewer or additional rules may also be used. The column "Candidates for elimination" in Table 3 identifies display types that are eliminated if the condition under the corresponding "Condition" column is true. Here, the term "graphs" refer collectively to bar, line and pie graphs.
Table 3. Elimination rules.
Rule 1
Condition: One or more values of a data component are non-numerical, or do not contain compatible units.
Candidates for elimination: all graphs and xy plots.

Rule 2
Condition: Flat data table contains only a single row of data cells.
Candidates for elimination: column-wise graphs and column-wise xy plot.
Comment: A data series must have more than 1 data point.

Rule 3
Condition: Flat data table is in non-"extended" format and all column headers do not have numerical values with compatible associated units.
Candidates for elimination: row-wise xy plot.
Comment: The column headers constitute the x-coordinate values and hence must have compatible units.

Rule 4
Condition: Values of the label component corresponding to each value of the series label component are not unique.
Candidates for elimination: row-wise graphs.
Comment: Data series in a graph can have at most 1 y-value corresponding to each x-axis label.

Rule 5
Condition: Values of the series label component are not unique.
Candidates for elimination: column-wise graphs.
Comment: Data series in a graph can have at most 1 y-value corresponding to each x-axis label.

Rule 6
Condition: The number of cells in the flat data table is too large.
Candidates for elimination: bar graphs.
Comment: This is the number of bars in the graph.

Rule 7
Condition: The number of columns in the flat data table is too large.
Candidates for elimination: row-wise pie graph.
Comment: This is the number of segments in a pie chart.

Rule 8
Condition: The number of rows in the flat data table is too large.
Candidates for elimination: column-wise pie graph.
Comment: This is the number of segments in a pie chart.
As in the analysis phase 3160, schema information, if available, can be used to reduce the amount of actual data that needs to be examined. Recall that the flat data table constructed in the analysis phase 3160 also contains information on the data types of unseen data items. This information is used in the execution of each elimination rule, in addition to the actual data already present in the data table. For example, when executing Rule 1 in Table 3 which tests for the presence of non-numerical data items, if the schema information associated with an unseen item indicates that it has a non-numerical data type, then the test succeeds immediately without waiting for the item to become available.
Alternatively, if the schemas associated with all unseen data items indicate that they are all of numerical data types, then the test fails, again without waiting for any of these items to become available.
As has already been mentioned, when data is accessible over slow network connections, it is preferable that the ordering of display types proceed without waiting for all data to be available. Whilst the use of schema information can help in alleviating the need to examine all data, it may not always be available or sufficiently effective.
Consequently, the time allocated for the elimination phase 3170 is typically limited to a short duration. If this duration lapses and the elimination phase 3170 has not been completed, it is terminated prematurely, and the list of candidates remaining at the time is taken as the list of possible display candidates to be used in the next preference phase.
A flow-chart of the elimination phase 3170 is given in Fig. 49. After an initial entry point 4902, step 4905 operates a timer for the elimination phase. If the allotted time has elapsed, control passes to step 4930 where the elimination phase 3170 ends. If not, step 4910 detects whether or not one or more data items have become available. If not, control returns to check the timer at step 4905. If so, step 4915 follows which selects an elimination rule. Once a rule is selected, step 4920 follows to execute the selected rule, this being shown in detail in Fig. 50. Step 4925 follows which tests if all elimination rules have been processed and, if so, the elimination phase ends at step 4930. If not, control returns to step 4915 to select a yet unprocessed rule.
The process depicted by step 4920 is shown in detail in Fig. 50, which has an entry point 5000. Step 5002 follows which tests if the rule has been successfully processed. If so, the process 4920 concludes at step 5016. If not, step 5004 follows to determine if the selected rule's elimination candidates have been removed. Again, if so, the process 4920 concludes at step 5016. If not, step 5006 operates to evaluate the selected rule's condition. Step 5008 follows to test if the rule's condition is true. If so, step 5010 then removes the rule's elimination candidates from the list of display candidates. If not, step 5014 tests if the rule's condition is unknown. If so, the process 4920 concludes at step 5016. If not, step 5012, which also follows step 5010, is implemented which marks the rule as having been successfully processed. The process 4920 then concludes at step 5016.
5.4 Preference Phase

At the completion of the elimination phase 3170, it is possible that more than one display type remains in the list of possible display types. If this is the case then a third phase, the preference phase 3180, begins to rank the remaining candidates in descending order of preference. At the completion of this phase 3180, the top candidate in the ordered list is presented to the user. The remaining ordered list of candidates is also presented to the user, giving the user the option of selecting alternative display types for the data.
The criteria used for ranking the list of display candidates are preferably expressed as a set of preference rules. As in the elimination phase 3170, the preference rules are modular in nature and hence can be modified or deleted without affecting the behaviour of other rules. Likewise, new rules can be added to the system without a need for modifying existing rules. In contrast, existing approaches for selecting among display types employ fixed, pre-determined sequences of tests that are not readily modifiable.
In the present arrangement, a preference rule compares a pair of display candidates and produces one of three possible outcomes: (i) the first candidate is preferred over the second candidate, (ii) the second is preferred over the first, or (iii) there is no preferred choice among the pair. Restricting the scope of each rule to just a pair of candidates in this way leads to simpler rules since considerations need not be given to other candidates. A list of preference rules is given in Table 4. The column "A preferred over B" gives the condition that must be true for display type A to be preferred over display type B, and similarly for the column "B preferred over A". Fewer or additional rules are also possible. Rule 3 states that any other display type is preferred over the tree type.
Table 4. Preference rules.
Rule 1 (bar v. line)
A preferred over B: x-axis labels represent non-continuous quantities or have non-regular intervals (eg. geographical regions, departmental names).
B preferred over A: x-axis labels represent continuous quantities and have regular intervals (eg. year, month).

Rule 2 (row-wise table/bar/line graph v. column-wise table/bar/line graph)
A preferred over B: Flat data table has fewer rows than columns and the number of columns is not too large.
B preferred over A: Flat data table has fewer columns than rows and the number of rows is not too large.

Rule 3 (tree v. other)
A preferred over B: False.
B preferred over A: True.

Rule 4 (row-wise pie v. bar/line)
A preferred over B: Flat data table has 1 row.
B preferred over A: False.

Rule 5 (col-wise pie v. bar/line)
A preferred over B: Flat data table has 1 column.
B preferred over A: False.

Rule 6 (2D grid v. table/bar/line/pie/xy plot)
A preferred over B: One or more data components have pictorial contents.
B preferred over A: No data component has pictorial contents.

Rule 7 (table v. others except tree)
A preferred over B: False.
B preferred over A: True.

A difficulty with using a modular set of preference rules as described above is that it can lead to conflicting results. This can occur in a couple of ways. Firstly, rules comparing the same pair of candidates may produce different outcomes. Secondly, rules comparing different pairs of candidates, when considered together, may lead to ambiguous preference relations. As an example of the latter, consider the case where there are three display candidates a, b and c. Suppose that one preference rule prefers a over b, another prefers b over c, and yet a third rule prefers c over a. In this scenario, the preference relations among the candidates a, b and c are ambiguous.
The first problem is avoided by employing at most one preference rule for each distinct pair of display candidates. The second problem, on the other hand, cannot be avoided without placing carefully crafted inter-rule constraints and dependencies, which would destroy the desirable modular nature of the system. The preferred approach is to incorporate some means for resolving the ambiguities. For simplicity reasons, rather than employing elaborate conflict resolution methods, the described arrangement addresses the problem by simply ignoring those preference relations that are ambiguous, and generates an arbitrary ordering among the display candidates that correspond with those results.
The presence of such ambiguities is detected by representing the display candidates and their preference relations as a directed graph. Each node in the graph is a display candidate, and the directed links between nodes represent the outcomes of the preference rules. In particular, if a rule prefers a first display candidate over a second display candidate then a link is created originating from the node corresponding to the first display candidate and terminating at the node denoting the second candidate. When no rule exists for a pair of candidates or when a rule exists but produces no preference between the pair, no direct link is created between the corresponding nodes in the directed graph.
With the above directed graph representation, ambiguous preference relations give rise to directed cycles. An example of a directed graph representation of display types with ambiguous preference relations is shown in Fig. 51. The ambiguity is evidently depicted by the directed cyclic paths between the display types "column-wise bar graph", "row-wise pie graph", and "row-wise bar graph".
Directed cycles in directed graphs can be detected by identifying "strongly-connected components" using well-established algorithms, such as those described in "Algorithms", R. Sedgewick, 2nd Ed., Addison-Wesley 1989. A strongly-connected component is a set of nodes in which there exists a directed path from each node to every other node in the set. Once such a component is found, the ambiguities are removed by deleting links between every pair of nodes in the set. Fig. 52 shows the result of deleting ambiguous preference links from the graph of Fig. 51.
The above directed graph representation also allows the ordering of all display candidates to be easily obtained using well-established "topological sorting" algorithms, described in the text referred to above. These algorithms produce an ordering of the nodes in such a way that if there exists an undeleted link originating from a first node to a second node, then the first node will appear before the second node in the ordered list.
An example of such an ordering obtained for the graph of Fig. 52 is {col-wise bar, col-wise table, row-wise table, row-wise bar, row-wise pie, col-wise pie, tree}, in order of descending preference.
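The combination of strongly-connected component detection, ambiguous-link removal and topological sorting described above can be sketched as follows. The Kosaraju-style component search and Python's graphlib sorter are stand-ins for the well-established algorithms cited, not the patent's own implementation:

```python
from graphlib import TopologicalSorter

# Sketch of the ambiguity-removal and ordering steps. Preference links
# are directed edges (preferred, less_preferred). Strongly-connected
# components are labelled, links inside a component are deleted, and
# the surviving links drive a topological sort.
def order_candidates(nodes, edges):
    adj = {n: set() for n in nodes}
    radj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        radj[b].add(a)

    seen = set()
    def dfs(graph, start, out):
        seen.add(start)
        for m in graph[start]:
            if m not in seen:
                dfs(graph, m, out)
        out.append(start)

    # Pass 1: order nodes by DFS finish time on the forward graph.
    finish = []
    for n in nodes:
        if n not in seen:
            dfs(adj, n, finish)

    # Pass 2: walk the reverse graph to label each component.
    seen, comp, label = set(), {}, 0
    for n in reversed(finish):
        if n not in seen:
            group = []
            dfs(radj, n, group)
            label += 1
            for m in group:
                comp[m] = label

    # Delete ambiguous links (those inside a component), then sort.
    ts = TopologicalSorter({n: set() for n in nodes})
    for a, b in edges:
        if comp[a] != comp[b]:
            ts.add(b, a)   # a is preferred, so a must precede b
    return list(ts.static_order())
```

Applied to the three-candidate cycle discussed earlier, the cycle collapses into one component, its internal links are discarded, and the members appear in an arbitrary order relative to one another while still preceding any candidate they all dominate.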
A flowchart of the preference phase 3180 is given in Fig. 53. After an entry point 5300, step 5302 creates a node for each candidate display type.
Step 5304 then executes a preference rule. Step 5306 initially tests whether a 1st candidate is preferred over a 2nd candidate. If so, step 5312 then creates a link from the 1st to the 2nd candidate. If not, step 5308 tests if the 2nd candidate is preferred over the 1st candidate. If so, step 5310 then creates a link from the 2nd to the 1st candidate. If not, indicating there is no preference relationship between the 1st and 2nd candidates, step 5314, which also follows steps 5312 and 5310, executes another preference rule.
Step 5316 checks to see if all preference rules have been executed. If not, control is returned to step 5306. If so, step 5318 follows and operates to identify strongly-connected components, using well-established algorithms such as those described in "Algorithms", R. Sedgewick, 2nd Ed., Addison-Wesley 1989. Step 5320 then removes all links between candidates within each connected component, and step 5322 orders the candidates, preferably using a topological sorting procedure. The preference phase 3180 then concludes at step 5324.
Preferably, the preference rules can adapt to user preferences using feedback from the GUI. For example, a particular user may dislike tables with many columns and repeatedly transpose such tables. The preference rules could therefore modify their optimal number of columns for the table (see Rule 2 in Table 4).
6. Creating New Data Views

Section 2 describes how new data views can be created using a schema view. In this mode, a user may simply select the data sources that were required for the data view, and the data browsing application 120 can then generate a schema view which incorporates all the mappings which are relevant to the selected data sources. A GUI can then be provided (eg. Figs. 7A-7C) which allows the user to select the data components required for the new data view, specify the data components which represent essentially the same information in different data sources, and specify any constraints that would control what data appears in the data view (eg. where Salary $100,000 and Age ...). In addition, the user must specify how data components in different sources, which represent the same information, can effectively join data sources.
The term "join" is used by existing relational database management systems (RDBMSs) to effectively combine or join information from more than one table. Usually such a join requires the expression of a congruence condition. For example, the following simple SQL statement effects a join between tables t1 and t2 based on the congruence condition t1.id = t2.id:

select * from t1, t2 where t1.id = t2.id

The generator of this SQL expression must have had prior knowledge that the id columns of tables t1 and t2 had the same data. Similar congruence conditions can also exist between data components of different data sources and be used to create data views across the different data sources.
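Outside an RDBMS, the same congruence condition amounts to matching values of corresponding data components across two sources. A minimal sketch, in which the record layout and field names are hypothetical:

```python
# Sketch of joining two in-memory data sources on a congruence
# condition (here, equality of an "id" component), analogous to the
# SQL join above. The dictionary-based records are illustrative only.
def join_on_id(source1, source2):
    index = {row["id"]: row for row in source2}   # index the second source
    joined = []
    for row in source1:
        match = index.get(row["id"])
        if match is not None:                      # congruence condition holds
            joined.append({**row, **match})        # combine the two records
    return joined
```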
Now described is a preferred graphical method for creating new data views from existing data views. This method allows users to integrate the processes of creating a new data view and creating required mappings in a single graphical process, which is data-driven rather than schema-driven as described in Section 2. In this mode, it is not necessary for the data browsing application 120 to generate a schema view and thus the user does not need to understand the existence of a schema for data sources. Also, because the user works with actual data, problems that may be associated with correctly understanding what the names of data components mean are reduced. Furthermore, the existing data views can bring implicit knowledge about joins between data sources of which the user may not be aware. Indeed, the user can create new data views using this method without even being aware of join, or congruence, relationships that others may
(Ni N In this method, the user works in the GUI environment 1200 displayed by the data Sbrowsing application 120 upon the video display 1114 as depicted in Fig. 12A and substantially as described in Section 5. The GUI 1200 allows the user to modify the data views displayed in the workspace 1202. For example, the user can modify presentation properties fonts, styles, colours, etc.), apply filters, change the sort order, hide or rename data components, specify and apply transformations/combinations that may apply to one or more data components, etc.
In the preferred arrangement, each data view is associated with an XQuery expression. XQuery (see http://www.w3.org/XML/Query), or XML Query, is a query language which can be used to express queries across various forms of data, whether physically stored in XML or viewed as XML via middleware. XQuery Version 1.0 is an extension of XPath Version 2.0. Any expression that is syntactically valid and executes successfully in both XPath 2.0 and XQuery 1.0 will return the same result in both languages. A module that executes XQuery expressions is called an XQuery processor.
XQuery is the preferred query language because of its ability to address relational and hierarchical data sources. Clearly, other query languages with similar capabilities could also be used.
The basic building block of XQuery is the expression. Path (XPath) expressions are used to locate nodes within a tree whereas flwor expressions are used for iteration and for binding variables to intermediate results. The latter kind of expression is often useful for representing joins between two or more data sources and for restructuring data. The name flwor stands for the keywords for, let, where, order by and return, the five clauses found in a flwor expression. Other expressions, which represent sequences and logical combinations of these basic expressions, are also permitted.
The method of creating new data views using existing query data will now be described with reference to Fig. 54. This method is described with respect to data components, however it should be clear that data components also represent data nodes, data sets and data series as defined in Section 2. The term data component is used as a generalisation of these terms.
Fig. 54 shows a method 5400 which is preferably implemented as a module of the data browsing application 120. The method 5400 commences at step 5405 where existing data views are displayed in the workspace 1202. This step permits the user to select one or more required existing data views. These data views may arise from selection of data sources or previously created data view definitions via the datamarks panel 1205 of Fig. 12A as described in Section 5. The selected data views may utilise any of the implemented display types as described in Section 5. The user then indicates that he/she wants to create a new data view in the workspace 1202. In the preferred arrangement the user can do this in one of two ways.
First, the user can select the New Data View option from the contextual menu 1292 for the workspace 1202. The contextual menu 1292 may be displayed by right clicking the mouse 1103 somewhere in the whitespace of the workspace 1202, as depicted in phantom in Fig. 12A. The data browsing application 120 then, according to step 5410, presents the user with a list of possible display types for the new data view and the user can select a preferred display type from this list. So, for example, two existing data views may be presented in the workspace 1202 using a table and a bar chart display type, respectively. The user may select to create a new data view with a display type of a line graph. This action results in the default template for the selected display type being displayed in the workspace 1202 as the new data view in step 5415. The initial size and position of this data view are assigned as described in Section 5. The data browsing application 120 also initialises the XQuery expression associated with the new data view.
In the second way, the user can select one or more data components from the one or more existing displayed data views and copy or drag the data components to an unused location in the workspace 1202. On dropping or pasting the data components, a new data view is created at the drop or paste location. This data view has a display type that is consistent with the display type(s) of the existing data component(s). For example, if a data component were dragged to the workspace 1202 in such a way that it acted as the x-axis of a line graph in the existing data view, then the new data view would be a line graph. The created data view would be displayed using the default template for a line graph with the dragged component acting as the x-axis.
If, however, two data components had been copied and pasted to a location in the workspace 1202, one from a line graph and one from a table, the new display type is that having the least constraints (e.g. a table). If more than one data component is used to initialise a data view, then the checks performed in step 5425 (and described below) are also performed before the new data view is created in the workspace.
Following from step 5415, in step 5420 the user can select to copy one or more data components from the existing one or more data views in the workspace 1202 to act with a specified role in the new data view. The role is indicated by the selected target position of the paste or drop in the new data view. For example, if the user pastes a copied data set onto the x-axis of a line graph then it is assumed that the user wishes that data set to act as the x-axis for the graph. Similarly, if the user pastes a data set onto a particular column of a table, then it is assumed that the data set should replace that column of the table (i.e. it should assume the role of that particular column of the table). Preferably menu options also provide the user with the options of inserting before and after the selected table column(s).
If more than one data component has been copied, then the indicated role in the new data view must be able to support more than one data component. For example, in the preferred arrangement, graph (line or bar) templates support more than one y-axis data set but only a single x-axis data set. Alternative arrangements could permit multiple x-axes and thus have templates which support this feature. Similarly, a table can support multiple columns, whereas a pie chart template may support one or more individual pies (each visualising a single data series). In other words, the possible roles for a new data view depend on the template used to create the data view. If the indicated role in the new data view does not support multiple data components, then an error is generated in step 5430 as described below.
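The per-template role constraints described above can be pictured with a small sketch. The table below is hypothetical (the actual templates and their limits are a matter of the implementation); it simply shows the kind of check that step 5425 might perform before step 5430 reports an error.

```python
# Hypothetical role constraints per display template: role -> maximum
# number of data components it can hold (None = unlimited).
TEMPLATE_ROLES = {
    "line_graph": {"x-axis": 1, "y-axis": None},
    "bar_chart":  {"x-axis": 1, "y-axis": None},
    "table":      {"column": None},
    "pie_chart":  {"pie": None},
}

def role_accepts(display_type, role, n_components):
    """Return True if `role` in the given template can take n_components
    data components at once (the kind of check behind steps 5425/5430)."""
    limits = TEMPLATE_ROLES.get(display_type, {})
    if role not in limits:
        return False
    limit = limits[role]
    return limit is None or n_components <= limit
```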
The copy can be done in one of two ways. First, the user can copy (or cut) the data component from its existing data view and then paste it in the new data view. Second, the user can select a data component and drag it into the new data view.
Preferably a shadow of the dragged column is shown during the drag. The role of the copied data can be indicated by a data component drop target (such as an x-axis of a graph) or a separator drop target (such as a border between two columns). In the latter case the dragged data component is inserted at the border. Drag operations between data views result in the dragged data component being copied. Drag operations within a data view are also allowed, but in these cases the dragged data component is moved from its original place to the target place.
SBefore a data component is added to the data view a check is performed in
OO
0 5 step 5425 to ascertain whether the data component is compatible with its indicated role in the new data view. In other words, the data manipulation indicated by the user must be consistent with the semantics of the display type. This is described in Section 5. This NsI means, that if a user dragged a data set to a table data view, and this data set was not able to be joined to other data sets already in the table, the drag would be disallowed. This is because a table typically only makes sense if the data of a row of a table relates to a single entity. An error message would be presented to the user in step 5430 to describe the reason for not allowing the attempted drag and the process would continue at step 5440.
Similarly, an attempt to drag a non-numeric data set to act as a "y-axis" of a bar chart would also be disallowed.
The data browsing application 120 can ascertain whether attempted data manipulations are allowable by examining both the queries associated with the existing data views and the data specifications associated with the manipulated data components.
The data specifications are formed as part of the display type decision process described in Section 5 and provide the means to connect manipulated data with their corresponding specifications in queries. The existing data views effectively act as sources of data for the new data view. The data browsing application 120 can also make use of its own stored knowledge of known congruences (joins). It persistently maintains such knowledge.
If it was ascertained in step 5425 that the attempted data manipulation was allowable, then the data component is added to the displayed new data view in step 5435.
Also the XQuery associated with the new data view is updated. This means that the user can select to save a data view at any time as its associated query will always be consistent with the displayed data. If further data components are to be copied to the new data view at check step 5440, then the process returns to step 5420.
The process of Fig. 54 will now be described in detail with respect to the following example of creating a bar chart from a set of existing data views. Suppose that the user wishes to compile a chart showing how well each project in his/her company has performed with respect to filing a target number of patents for a particular year. The company has a data source, ProjectsDB, which can be browsed via a data server using the data browsing application 120. This data source contains details of all the company's projects. Its structure can be represented as follows:

ProjectsDB
    Year
        Project
            Code
            Name
            Description
            Budget
            Manager
            PatentEstimate
        ProjectResources
            ProjectCode
            EmployeeID
            PersonMonths

The user preferably has recorded the following join information in his/her data browsing application 120:

ProjectsDB/Year/Project/Code
ProjectsDB/Year/ProjectResources/ProjectCode

To display information about the company's projects in a data view, the user can select the ProjectsDB datamark 1210 in the datamarks panel 1205 as shown in Fig. 12A.
Initially this would preferably display a link for each year for which project data has been recorded. The user can select the year of interest (e.g. 2002). This results in the data view being updated to now show two further links, one for Project and one for ProjectResources. If the user selected the Project link, then a data view as shown in the top left hand corner of Fig. 12A would be displayed in the workspace 1202 (some column data is not shown). The open location control 1208 displays the query associated with the currently selected data view as a URI. In this case, the XQuery expression is a path expression.
To limit this data to show just those projects managed by "Joe Brown", the user could select the Manager column and specify a filter constraint for that column (e.g. Manager = "Joe Brown"). Immediately the data in that data view would be restricted to just those projects managed by "Joe Brown" in the selected year. This filtering operation is not necessary for the current task. Filter operations are described in more detail in Section 7.2.

The filter constraints specified for the Project data view are recorded by the data browsing application 120. If the user selected to save that data view for re-use, this filter constraint would be integrated into the query for the data view. For example, its associated XQuery would be:

XQuery Example 1

let $projects := document("http://www.example.com/Projects?/ProjectsDB")
for $p in $projects/Year[.=2002]/Project
where $p/Manager = "Joe Brown"
return $p

In XQuery Example 1, the process identified by http://www.example.com/Projects represents a data server, and the expression /ProjectsDB following the question mark represents the query for the data server.
The application of a filter in this example has resulted in the XQuery expression being changed from a path expression to a flwor expression. Preferably filters are expressed using the where clause of the flwor expression. This process is described further in Section 7.2. Alternative arrangements may preserve the path expression and apply the filter in the form of predicates.
Using the same workspace 1202, the user may then select to display the resources required for these projects. To do this, the user once again selects the ProjectsDB datamark 1210, the desired year, and this time follows the ProjectResources link. This results in a table listing all the data components contained in the ProjectResources data component. The data browsing application 120 automatically connects the Code of the Project data view with the ProjectCode of the ProjectResources data view with a join connector 1222 as shown in Fig. 12B. Display of the join connector 1222 is possible because of the known congruence of these two data components. There may be a large number of rows in the ProjectResources data view. Fig. 12B shows a vertical scroll bar 1220 in that data view partly scrolled to show just the data for projects having a ProjectCode of "DLE" and "Page+".
Now, in order to complete the task, the user must obtain information about the number of patents actually filed for each project in the specified year, 2002. To achieve this, the user selects the Project Patents 2002 data view 1230 in the data view section of the datamarks panel 1205. This data view results in the display of a bar chart, as shown in Fig. 12C, using the method described in Section 5. This data view has been derived previously using the CompanyPatents datamark 1231 in the datamarks panel 1205.
This datamark corresponds to a data source that can be hierarchically represented as follows:

Patents
    Invention
        ProjectCode
        InventionCode
        Year
        InventorName1
        InventorName2
        InventorName3
        InventorName4
        DateFiled
        Abstract

The XQuery associated with the Project Patents 2002 data view is as follows:

XQuery Example 2

let $patents := document("http://www.example.com/Patents?/Patents")
for $p in distinct-values($patents/Invention[Year=2002]/ProjectCode/text())
let $inv := $patents/Invention[ProjectCode = $p and
    DateFiled >= date("2002-01-01") and DateFiled <= date("2002-12-31")]
return
    <Project>
        <ProjectCode> {$p} </ProjectCode>
        <PatentsFiled> {count($inv)} </PatentsFiled>
    </Project>

The process identified by the URI http://www.example.com/Patents represents a data server dedicated to the Patents data source.
This query first extracts all the distinct ProjectCode values, and then for each one the query instructs a list of inventions that were filed during 2002 to be obtained. The number of elements in this list can be counted using the XQuery count() function. The query returns a list of Project elements. Each Project element has a ProjectCode element with content derived from variable $p and a PatentsFiled element with content that has been derived from applying the count() function to the $inv variable (that holds a list of all the Invention elements that satisfy the letAssignment clause of the XQuery).
Preferably, the user has also recorded the following join information in his/her data browsing application 120:

ProjectsDB/Year/Project/Code
Patents/Invention/ProjectCode

Joins can be registered by a user by selecting two data sets in the workspace 1202, and then selecting the Join icon 1232 on the toolbar 1207. This action results in a join being stored by the data browsing application 120 for the selected data components.
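A join registration of this kind can be pictured as a small registry of unordered path pairs. This is an illustrative sketch only; the class and method names are invented, and the persistent storage actually used by the data browsing application 120 is not specified here.

```python
# Minimal sketch of a join registry: each known congruence is stored as
# an unordered pair of data-component paths, so lookups are symmetric.
class JoinRegistry:
    def __init__(self):
        self._joins = set()

    def register(self, path_a, path_b):
        """Record a congruence (join) between two data components."""
        self._joins.add(frozenset((path_a, path_b)))

    def are_joined(self, path_a, path_b):
        """True if a join between the two paths has been registered."""
        return frozenset((path_a, path_b)) in self._joins

registry = JoinRegistry()
registry.register("ProjectsDB/Year/Project/Code",
                  "Patents/Invention/ProjectCode")
```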
Consistent with step 5410, the user now right-clicks the whitespace of the workspace 1202 and selects the option New Data View from the displayed contextual menu 1292. The user then selects "Bar Chart" from the presented list of display types and the default template for a bar chart is displayed in the next available grid unit in the workspace as shown by Fig. 12D. The template has no data initially, just slots for data components (e.g. the x-axis slot 1252) and text objects (e.g. the Title slot 1250). Preferably the text slots are differentiated from the data component slots by shading. In Fig. 12D, text slots are differentiated by a dashed border (e.g. 1250). Preferably the template also shows an example of the bars 1258 that will be generated once data is specified for the new data view.
To establish an x-axis for the new data view, the user can copy the ProjectCode data component from the Project Patents 2002 data view and paste this data component in the slot reserved for the x-axis, 1252. Alternatively, the user can drag this data component to slot 1252. Immediately labels for the x-axis are displayed representing the projects for the year 2002. Preferably, when a data set is copied, any predicates implied by the data set's iteration operation are maintained (e.g. Year = 2002; see XQuery Example 2).
The user can then copy the PatentEstimate data set from the Project data view by selecting the relevant column in the table, pressing CTRL C on the keyboard 1102 and then pasting this data set on the left-hand y-axis of the new data view. This indicates to the data browsing application 120 that the pasted data set is to act as a y-axis with respect to the selected x-axis and hence be dependent on that axis. This is an allowable manipulation because the data browsing application 120 knows that the Code data component of the Project data view is joined to the ProjectCode data component from the Project Patents 2002 data view, and that the Code and PatentEstimate data components of the Project data view have a point-wise correspondence.
If a copied data set must correspond with other data sets in the receiving data view (e.g. as in a table data view), then the step 5425 of Fig. 54 assesses whether this is possible. In the simplest case, the copied data set may share the same parent node as the other data sets in the data view and thus the current iterator for the table data view is unchanged by the addition of another data set (e.g. a column of the table). However, in other cases the data set may be able to be copied because there exists some join condition involving its iterator and the existing iterator of the table. Although the join condition implies that the two iterators can be unified, it does not imply that there is a one-to-one correspondence of the data.
For example, the ProjectsDB data source may have a record of all the projects; however, the Project Patents 2002 data view may contain a subset of these projects (i.e. only those projects for which patents have been filed). Therefore, if the user selected to copy the PatentsFiled data set from the y-axis of the Project Patents 2002 data view to a new column of the displayed Project data view in Fig. 12C, this manipulation would be allowed because of the join condition between ProjectsDB/Year/Project/Code and Patents/Invention/ProjectCode. However, the iterator for the new column of the table would result in a subset of the projects listed in the table (i.e. not all the projects listed in the Project data view would have a corresponding value for PatentsFiled). In other words there is more than one way of presenting the joined data to the user. So, for example, should projects be listed in the updated table if they don't have a corresponding value for PatentsFiled, or should new projects be added to the table if the Patents data source referenced projects that had not been stored in the ProjectsDB data source? These different options correspond to different methods of executing the join condition.
The preferred arrangement allows the user to select from the following three methods of effecting a join condition: (i) distinct union; (ii) outer join; and (iii) inner join.
For the first distinct-union method, the data browsing application 120 generates a query that iterates through the distinct (i.e. non-repeating) union of the join attribute values (e.g. ProjectsDB/Year/Project/Code and Patents/Invention/ProjectCode in the above-mentioned example) and then generates a data result for each identified join attribute value. If the data is missing from one data source, then an empty or zero element results. This method results in a union of data and thus a table with zero or empty cells. It is useful when a user is either unfamiliar with the data or wants to detect erroneous data. Filter operations can subsequently be applied to the data view to remove the empty or zero data.
For the second outer-join method, the data browsing application 120 generates a query where the added data set is obtained via a nested (inner) let or for clause in the XQuery. An inner let clause implies a one-to-one relationship between the two iterators whereas an inner for clause implies a one-to-many relationship. The nested iteration operation is predicated by the value of the join attribute value for the current outer iteration and a data result is generated for each data result of the outer iteration operation.
So in the case of the above-mentioned example, no extra rows would appear in the table; however, some rows may have a zero value for the PatentsFiled data component.
The final inner-join method is similar to the outer-join method, with the exception that a data result is only created if both the outer and inner iteration operations have a result. So, in the case of the above-mentioned example, rows of the table not having a value for the PatentsFiled data component would be removed from the table.
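The different outcomes of the three join methods can be illustrated with a small Python sketch over hypothetical project data. The arrangement itself generates XQuery, as described above; this merely models the resulting row sets, with None standing in for the empty or zero cells.

```python
# Illustrative data: estimates come from ProjectsDB, filing counts from
# the Patents source; "NewProj" exists only in the Patents source.
estimates = {"DLE": 5, "Page+": 3, "Archive": 2}   # code -> PatentEstimate
filed     = {"DLE": 4, "NewProj": 1}               # code -> PatentsFiled

def distinct_union(a, b):
    """Iterate the distinct union of join-attribute values; missing data
    yields None (the empty/zero cells mentioned above)."""
    codes = sorted(set(a) | set(b))
    return {c: (a.get(c), b.get(c)) for c in codes}

def outer_join(a, b):
    """Keep every row of the outer source; inner values may be missing."""
    return {c: (a[c], b.get(c)) for c in sorted(a)}

def inner_join(a, b):
    """Keep a row only when both sources have a value."""
    return {c: (a[c], b[c]) for c in sorted(a) if c in b}
```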
Preferably, the user can specify a default join behaviour for his/her browsing session, which is to be used for all join operations. This means that the user does not need to specify for each operation what type of join is required. However, the data browsing application 120 provides menu options for the user to change the join method for a particular data view. This results in the XQuery associated with the data view being changed to reflect the different patterns of iteration (e.g. from using a distinct-values iteration operation to a nested forAssignment node). Effecting join operations is discussed further in Section 7.
Returning now to the example task, when the PatentEstimate data set was copied into the new data view, the resulting data view would depend on the default join method selected by the user. If a distinct-union method was used then the data on the x-axis would reflect the distinct union of the following data components:

ProjectsDB/Year[.=2002]/Project/Code, and
Patents/Invention[Year=2002]/ProjectCode.

This would mean that new project code values may appear on the x-axis reflecting those projects that exist in the ProjectsDB data source but not the Patents data source. These new values would have associated patent estimate values. However, some project codes may not have corresponding patent estimate values (i.e. the project codes from the Patents data source).
If the outer-join method was used then no new projects would appear on the x-axis; however, some patent estimate values may not appear in the chart. If the inner-join method was selected then some project codes may disappear from the x-axis because they would be excluded from the query if there was not a corresponding patent estimate value.
For the remainder of this example, a distinct-union join method is assumed.
To obtain the comparison between the number of patents actually filed and those estimated for each project in 2002, the user can now select the PatentsFiled data set from the Project Patents 2002 data view. This can be achieved by either selecting the y-axis name (where there is a single data set associated with the axis) or selecting the data set name from a legend (if it is displayed). This data set can also be pasted to the left-hand y-axis of the new data view. This manipulation indicates to the data browsing application 120 that both the PatentEstimate and PatentsFiled data sets should use the same y-axis. This results in a legend being drawn for the new bar chart with PatentEstimate and PatentsFiled being listed. The user can modify these data set names by selecting the appropriate slots and editing the contained text. So, for example, in Fig. 12E the user has edited the y-axis name to be "No of Patents" and the data set names in the legend to be "Estimate" and "Actual".
The final task for the user is to show on the bar chart the resources that were used to get this result. The user selects the PersonMonths data set from the ProjectResources data view and copies this data set to the right-hand y-axis of the new data view. This indicates to the data browsing application 120 that PersonMonths is also to be graphed with respect to the ProjectCode. This is an allowable manipulation because of the join condition between the Code and ProjectCode elements of the ProjectsDB/Year/Project and ProjectsDB/Year/ProjectResources data components, respectively. In the preferred arrangement, the data browsing application 120 assumes that the person months for each project must be summed before being copied to the new data view. In an alternative arrangement, the user may be required to specify that the ProjectResources table first be grouped by ProjectCode by summing over all employees for a project.
Immediately data is shown for this data set. This new data component is added to the legend (see Fig. 12F, where the PersonMonths data component has been renamed to "Resources"). The template can use various means to distinguish between the y-axes used by the legend items. In the described example, it is assumed that colour is used. In other words, the bars for the PersonMonths data component are shown in a different colour to the PatentEstimate and PatentsFiled data components. Alternatively, lines could be used to represent the data for a right-hand side y-axis, creating a chart having a mixture of bar and line styles.
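The implicit grouping described above, where person months are summed over all employees of a project, can be sketched as follows. The rows are hypothetical; the arrangement itself expresses this with a sum() over a predicated path, as in XQuery Example 3.

```python
from collections import defaultdict

# Hypothetical rows from the ProjectResources data view.
resources = [
    {"ProjectCode": "DLE",   "EmployeeID": "e01", "PersonMonths": 6},
    {"ProjectCode": "DLE",   "EmployeeID": "e02", "PersonMonths": 4},
    {"ProjectCode": "Page+", "EmployeeID": "e03", "PersonMonths": 9},
]

def person_months_by_project(rows):
    """Group the rows by ProjectCode, summing PersonMonths over all
    employees of each project (the implicit grouping described above)."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["ProjectCode"]] += row["PersonMonths"]
    return dict(totals)
```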
The user can then add a title and perhaps edit some of the axis names. The final result is shown in Fig. 12F. The XQuery that is generated for this result is as follows:

XQuery Example 3

let $projects := document("http://www.example.com/Projects?/ProjectsDB")
let $patents := document("http://www.example.com/Patents?/Patents")
for $p in distinct-values((
    $patents/Invention[Year=2002]/ProjectCode/text(),
    $projects/Year[.=2002]/Project/Code/text(),
    $projects/Year[.=2002]/ProjectResources/ProjectCode/text()))
let $proj := $projects/Year[.=2002]/Project[Code=$p]
let $inv := $patents/Invention[ProjectCode = $p and Year = 2002 and
    DateFiled >= date("2002-01-01") and DateFiled <= date("2002-12-31")]
let $res := $projects/Year[.=2002]/ProjectResources[ProjectCode = $p]
return
    <Project>
        <ProjectCode> {$p} </ProjectCode>
        <PatentEstimate> {$proj/PatentEstimate/text()} </PatentEstimate>
        <PatentsFiled> {count($inv)} </PatentsFiled>
        <PersonMonths> {sum($res/PersonMonths)} </PersonMonths>
    </Project>

The final query thus represents a join between three sources of data (project data, patent data and project resource data). In each case the join is effected using the distinct-union join method. Performing distinct-union operations (using the distinct-values function as shown in XQuery Example 3) is generally not efficient. Alternative arrangements could reduce the processing associated with the primary iteration operation by analysing the available data. For example, if an examination of the data demonstrated that the ProjectsDB/Year/Project/Code data component contained a complete list of all the project codes, then the distinct union operation could be replaced by an iteration over the ProjectsDB/Year/Project/Code values.
693292 O -103- O The generated XQuery is included in the data view definition. The method by Swhich these XQueries can be generated is described in more detail in Section 7. The data view definition also contains presentation information and any mappings that may have been used in the construction of the data view (see Section 11). Note that the generated 00 5 query does not specify that the data must be displayed as a bar graph. When the query is subsequently executed, the presentation process described in Section 5 will determine the Sbest display type for the data. The generated query only defines the data required for the created data view. This means that if data sources involved in a query change between when the query was created and when its subsequently re-displayed, the display type used for the presentation will adapt to the data.
This graphical method of generating new data views can also be used to create new data components as a result of transformations or combinations of existing data component(s). The user can select to save these operations as new mappings that can be re-used in the future. These new mappings become part of the new data view's definition and are also saved as part of the user's mapping set. The user can select to perform a transformation or combination operation without creating a new mapping. In this case, the operation is just integrated into the XQuery which is generated for the new data view.
This process is described further in Section 7.
Fig. 13A shows a workspace region 1202, which shows a Contacts data view 1305. This data view is a table, which contains four columns consisting of the data components; SecondName, FirstName, Address and Email. In this example, the user would like to create a new data view, where the name appears as in the format "SecondName, FirstName", with the SecondName part of the new data component being uppercase and presented in bold font. If the user would like to re-use this 693292 -104o operation, then a mapping should be created.
SThis result can be achieved using the GUI 1200 by the user selecting the first data component of the mapping, the column of data titled SecondName, and dragging this column 1310 to a blank region of the workspace 1202. This occurs as described above.
00 The user then selects the second data component of the mapping, in this case the column of data titled, FirstName, and drags this column 1320 to a position that partially overlaps San existing column in the new data view. The new column can partially overlap to the left or right. As with the data appending drag operation described previously with reference to Figs. 12A to 12F, a shadow of the dragged column is shown during the drag.
If the column is dropped in a partial overlap position, the data browsing application 120 will assume that the two columns should be combined (concatenated) into a single column 1350 as shown in Fig. 13B. The concatenated column 1350 assumes the name of the left-most column, in this case SecondName.
Alternatively the combination operation between the two columns can be indicated by the user first dragging the SecondName and FirstName columns to the first two columns of the new data view. The user can then select both columns by using the keyboard 1102 and mouse 1103 by way of CTRL or SHIFT left click operations, and then choose the contextual menu 1292 option to Combine the selected data components.
This procedure will result in the two columns being concatenated into a single column as shown in Fig. 13B. If the user wished to just perform a transformation on a data component column of a table), the user could select the data component and then choose the Transform option on the contextual menu 1292.
The process of defining the transformation associated with the mapping is substantially as described in Sections 3 and 4. The user selects an example of data by 693292 O -105o clicking on a cell in the table using the mouse pointer 1103. So, for example, in Fig. 1313B, the user has selected the cell 1360a of the table 1350. The user edits the text of the example to indicate to the data browsing application 120 how the data for this column is to be transformed. In this case, the SecondName part of the new data component is O0 50 converted to uppercase and a comma and space inserted between the two source data Scomponents, as depicted in a separate cell box 1360b in Fig. 13B, for the sake of clarity.
The user has also applied the bold style to the SecondName part of the new data component. On detecting a pressing of ENTER on the keyboard 1102 by the user to indicate completion, the data browsing application 120 analyses the edited example using the method described in Section 4 and infers the transformation indicated by the user's edited example.
The user can then accept the inferred transformation or modify it using the method described in Sections 3 and 4. The user can also specify whether the performed operation should be saved as a mapping. The default behaviour for this property is preferably stored as a user preference. If the user selects to create a mapping, the name of the data component will be registered as the target data component name for the mapping. The mapping will be created in the user's namespace.
The updated data in table 1350 is shown in Fig. 13C. The user can then select the title 1380 of the column and rename it to MyName. As will be appreciated from an example name 1382, the second name is capitalized and bolded, and separated from the first name by a comma and space. If a mapping is being created these presentation characteristics are preferably stored as part of the mapping. Preferably the created mapping is saved immediately to the user's mapping set. It is stored as part of the data view's definition, but this definition is only saved when the user selects to do so. It 693292 O -106- U should be clear that other data-based GUI methods for defining new mappings can be implemented without departing from the scope of the present disclosure.
7. Maintaining Queries for Data Views

The previous section describes how the user can manipulate the data associated with existing data views in a GUI to visually create a new data view. The methods described can also be used to modify existing data views. For example, a user can select and delete a data series from a graph. Both the processes of creating a new data view and modifying an existing data view involve maintaining a query expression for each displayed data view. This process of internally maintaining queries for data views is now described.
The XQuery expression is associated with the root node of the displayed XML data of the data view. It is this expression that the data browsing application 120 uses to obtain data, from either the Intranet or the Internet, for the data view. XQuery expressions can be represented as a tree structure. Preferably the XQueryX syntax (see http://www.w3.org/TR/xqueryx) is used but other query tree structures could also be used. An example of an XQueryX representation of an XQuery is shown in Fig. 58. This is the XQueryX form of XQuery Example 1 above. It represents the query for the Project data view shown in Fig. 12A. In the query tree structure, individual components of the query (e.g. for clauses) are broken into distinct node trees (e.g. forAssignment nodes). This enables an iteration operation, for example, to be extracted or copied as a sub-tree to another sub-tree. This is essentially the process that must be performed when a user copies a data set from one data view to another.
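The sub-tree copying described above can be illustrated with a small sketch. The element names below (flworExpr, forAssignment, returnExpr, pathExpr) are simplified stand-ins loosely modelled on XQueryX rather than the real XQueryX grammar; the point is only that holding a query as a tree lets an iteration operation be copied between queries as a sub-tree.

```python
import copy
import xml.etree.ElementTree as ET

# A minimal, illustrative query tree in an XQueryX-like XML form
# (element names are assumptions made for this sketch).
SOURCE_QUERY = """
<flworExpr>
  <forAssignment variable="p">
    <pathExpr>document('http://www.example.com/Projects?/ProjectsDB')/Year/Project</pathExpr>
  </forAssignment>
  <returnExpr><variable name="p"/></returnExpr>
</flworExpr>
"""

def copy_iteration(source_root, target_root):
    """Copy the forAssignment sub-tree of one query tree into another,
    mimicking the copy of an iteration operation between data views."""
    for_node = source_root.find("forAssignment")
    # Deep-copy so the source query tree is left untouched.
    target_root.insert(0, copy.deepcopy(for_node))
    return target_root

source = ET.fromstring(SOURCE_QUERY)
target = ET.fromstring("<flworExpr><returnExpr/></flworExpr>")
copy_iteration(source, target)
```

After the call, the target query tree carries its own copy of the iteration operation while the source tree is unchanged.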
Preferably the queries that are generated by user-indicated manipulations of data are expressed in terms of the data sources and not in terms of data views from which the data may have been copied. This means that the generated query is independent of other data view definitions and can be exchanged with others, without the other users having access to the original data view definitions from which the data was copied. It also means that data components can be copied from data views that might have sensitive information, without necessarily releasing the source data view to others. As mentioned in Section 2, the preferred arrangement assumes that data security is maintained at the data source level. Furthermore the data for the generated query can be obtained directly from the necessary data sources without having to read and process any interim data views.
Alternative arrangements may generate queries which depend on the data views from which new data views are constructed (i.e. source data views are treated as data sources). Although this method may make the process of creating new queries simpler, the process of obtaining data for the generated queries is more complicated because it involves accessing and analysing the definitions of all the data views involved in the generated query.
When a data view is selected for presentation the query associated with the data view is parsed into a query tree. The data view manager object 3125 of Fig. 31A that is associated with the data view uses this query tree to obtain data for the data view. The resulting data is then analysed and presented as described in Section 5 by a corresponding data view presenter object 3120.
A data view may contain hyperlinks to further XML data. If the user follows these hyperlinks, a new XML document results and it is displayed using the abovementioned process. The result is a new data view (with its associated query) which is displayed in the same grid position in the workspace. The query associated with this new data view is derived from the previous data view and the hyperlink. If the user selected to save the data view at this point, then a data view corresponding to the currently displayed data is saved. Thus when a user is using the data browsing application 120 to browse through a data source, the user is presented with a series of implicit data views each of which can be manipulated and saved as re-useable explicit data views. An explicit data view is one which is associated with a stored data view definition (see Section 11).
The analysis process described in Section 5 also associates the data components of the data view with path expressions and iterators that specify how the data is obtained with relation to the data view's XSDOM data 3130. So, as described in Section 2, a data set (such as displayed in the column of a table) is specified by an iterator and an optional path relative to the iterator. For example, in Fig. 12A, the data browsing application 120 associates the table column PatentEstimate with an iterator of Project and a path expression of PatentEstimate. When this data set is copied into the new data view in Fig. 12E, this data set acts as a data series, which is associated with an independent data set (e.g. x-axis) derived from the Project Patents 2002 data view having an iterator, Project, and a path, ProjectCode.
Although the iterator is the same in both cases, these iterators refer to the XML data being viewed (i.e. the iterator with respect to the data of the return clause of an XQuery flwor expression). In order to ascertain whether the copy operation is allowable, the data browsing application 120 must resolve these data iterators with respect to their sources. In other words, the iterator must be converted into a source data path, which completely specifies the path for the iterator with respect to its source. For example, the source data path for the iterator associated with the PatentEstimate data set of the Projects data view is: document("http://www.example.com/Projects?/ProjectsDB")/Year[.=2002]/Project.
Source data paths are discussed further later in this section.
Each data manipulation the user performs is first checked for compatibility as described by step 5425 in Fig. 54. If the manipulation is compatible, then the data view manager object 3125 (Fig. 31A) of the data browsing application 120 effects the manipulation. For example, if a data set has been copied to a particular column of a table of a data view, then the data view manager object 3125 of the receiving data view is informed that the data set is to be added to the current data view in the role of a particular column number. The copied data set is identified by its source data view, its iterator and its path (relative to the iterator). The data view manager object 3125 then updates the query associated with the current data view, if possible, to account for the manipulation.
The data manipulation processes that are implemented in the preferred arrangement of the data browsing application 120 are:
1. Copying data component(s) to a data view;
2. Applying a filter to a data view;
3. Specifying a sort order for a data view;
4. Transforming a data component;
5. Combining two or more data components;
6. Hiding a data component;
7. Renaming a data component.
For each of these operations the data view manager object 3125, associated with the data view being manipulated, updates the query for the data view. These operations typically involve data sets and data series, however some operations can also apply to a data node (e.g. copying a single node to a tree data view, renaming or hiding a data component). If the user selects to copy, filter, sort, transform or combine a data series, then the data series is treated in a similar way to a data set.
When an operation involves a data set, the iterator associated with that data set is used to update the query of the receiving data view. The iterator informs the data view manager object 3125 of the repeating structure associated with a collection of data values.
The path associated with a data set informs the data view manager object 3125 of the relative location of the XML element or attribute (providing the values) with respect to the iterator element. The path is typically used when a group of data sets use the same iterator (e.g. columns of a table which share the same parent iterator element). A path does not need to be specified. If it is not, then it is assumed that the iterator specifies the entire path to the values of the data set. The different manipulation processes will now be described in more detail.
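The iterator-plus-optional-path convention above can be modelled with a small, hypothetical sketch (the class and method names are illustrative, not part of the described arrangement):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataSet:
    """A data set as described above: an iterator naming the repeating
    structure, plus an optional path relative to that iterator."""
    iterator: str
    path: Optional[str] = None

    def value_path(self) -> str:
        # If no path is given, the iterator itself locates the values.
        return f"{self.iterator}/{self.path}" if self.path else self.iterator

# The PatentEstimate column of the Project data view from Fig. 12A:
col = DataSet(iterator="Project", path="PatentEstimate")
```

Here col.value_path() resolves to "Project/PatentEstimate", while a data set with no path resolves to its iterator alone.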
7.1 Copying Data Components to a Data View

When the user cuts/copies or drags a data component from a data view in the workspace and selects to paste/drop that data component in another data view, the data view presenter object 3120 of the data browsing application 120 calls one of the following methods on the data view manager object 3125 for the target data view:
1. addDataSet(DataSet source, DataSet target)
2. addDataSet(DataSet source, DataSeries target)
3. addDataSeries(DataSeries source, DataSet target)
4. addDataSeries(DataSeries source, DataSeries target)
5. addDataNode(DataNode source, DataNode target).
In each of these methods, the source argument refers to the copied data. It specifies the source data view object, and the data node, set or series in that data view.
The target argument indicates the data set, series or node in the target data view after which the copied data is to be added.
Which of the above five methods is called depends on the display type of the target data view, the type of data that has been copied (data node, data set or data series) and the role of the copied data. The role of the copied data is determined by the drop or paste location in the target data view. If the user has copied a data set (e.g. from a table or from the x-axis of a graph), then the data view presenter object 3120 can call either the first or second method depending on the drop position. If a data series has been copied, the data view presenter object 3120 can call either the third or fourth method depending on the drop or paste position. When a data series is added to a data view in the role of a data set (e.g. column of a table, x-axis of a graph) then the data series is treated substantially as a data set.
If a data node is copied then only the fifth method can be called. If it is called and the drop or paste location in the target data view implies that a data set is expected then the data view manager object 3125 will report that the manipulation is not allowed. In the preferred arrangement the fifth method is only used to manipulate nodes of a tree. If there is no existing data specified in the target data view, the target argument can be set to null.
Preferably, a null target argument is only valid if the target data view contains no data (e.g. it has just been created using the New Data View menu option as described in Section 6).
The process of adding a data set to a data view is now described in more detail with reference to Figs. 55 to 61 and the example described with reference to Figs. 12A to 12F.
Specifically, the process of copying the PatentEstimate column of the Project data view to act as a data series in the new data view depicted in Figs. 12D, 12E and 12F will be described. The XQuery definition for the source data view is that of the Project data view as shown in XQuery Example 1. This XQuery is depicted as a query tree in Fig. 58.
The XQuery definition for the new bar chart, which is the target data view of the manipulation, is as shown in XQuery Example 4 and is depicted as a query tree in Fig. 59.
XQuery Example 4

let $patents := document("http://www.example.com/Patents?/Patents")
for $p in distinct-values($patents/Invention[Year=2002]/ProjectCode/text())
return
    <Project>
        <ProjectCode> { $p } </ProjectCode>
    </Project>

Note that when the ProjectCode data set was copied to the new bar chart from the Project Patents 2002 data view, the query for the bar chart maintained the distinct-values function, which was used to get all the distinct project codes that are associated with Invention elements for the desired year.
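The effect of the distinct-values call in XQuery Example 4 can be emulated in Python over a small, invented Patents document. This is only an illustration of the query's semantics, not part of the described arrangement; note also that distinct-values in XQuery guarantees no particular ordering, whereas this sketch keeps first-occurrence document order.

```python
import xml.etree.ElementTree as ET

# Sample Patents data, invented for illustration only.
PATENTS = """
<Patents>
  <Invention><Year>2002</Year><ProjectCode>P1</ProjectCode></Invention>
  <Invention><Year>2002</Year><ProjectCode>P2</ProjectCode></Invention>
  <Invention><Year>2002</Year><ProjectCode>P1</ProjectCode></Invention>
  <Invention><Year>2001</Year><ProjectCode>P3</ProjectCode></Invention>
</Patents>
"""

def distinct_project_codes(xml_text, year):
    """Emulate XQuery Example 4: the distinct ProjectCode values of
    Invention elements for the given year."""
    root = ET.fromstring(xml_text)
    seen, result = set(), []
    for inv in root.findall("Invention"):
        if inv.findtext("Year") == str(year):
            code = inv.findtext("ProjectCode")
            if code not in seen:  # keep only the first occurrence
                seen.add(code)
                result.append(code)
    return result
```

For the sample data, the 2002 query yields the codes P1 and P2 once each, as the bar chart's x-axis would.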
Fig. 55 shows a flowchart of a method 5500 of adding a data set to an existing data view which, again, is preferably implemented as a part of the data browsing application 120. In step 5505, the source data set/series is identified by the user interacting with a GUI such as shown in Figs. 12A-12F. The source data set/series can be identified by a copy or cut operation, or alternatively by the initiation of a drag operation.
So in the example, the user may have selected, and commenced to drag, the PatentEstimate column in the Project data view shown in Fig. 12D. Since the dragged data component is a data set, the data view presenter object 3120 associated with the target data view of the drag operation will ultimately call one of the first two methods listed above.
In step 5510, the paste/drop location is used to determine which data set/series of the target data view is to be treated as the target data set/series. For example, if the user has dropped a data set on the y-axis of a graph, then the data view presenter object 3120 associated with the target data view would first see if any data series already existed for the graph. If so then it would identify the last data series as the target data series and call the addDataSet(DataSet source, DataSeries target) method of the corresponding data view manager object 3125. If no data series existed for the graph, then the data set corresponding to the x-axis would be selected as the target data set, and the addDataSet(DataSet source, DataSet target) method would be called.
Alternatively, if the drop/paste location was between two legend items then the target data series would be set to that data series corresponding to the legend item immediately before the drop/paste location. If the source for the manipulation is a data series then clearly the third or fourth methods are called depending on the paste/drop target. When the addDataSeries() method is called on a data view manager object 3125, preferably that object can use the information about the label of the data series to assist in deciding whether the manipulation is allowable and if so, to update the query for the target data view. For example, the label of a data series can provide information about a join condition for the manipulation. Join conditions have been discussed in Section 6 and are described in more detail later in this section.
So in the example, depicted in Fig. 12D, the user drops the dragged PatentEstimate data set over the y-axis data component, 1253 of Fig. 12D. In this example, there are no existing data series for the new bar chart so the target data set is the x-axis data set and the addDataSet(DataSet source, DataSet target) method is called on the data view manager object 3125 associated with the bar chart data view. The data view manager object 3125 then processes this call.
In step 5515 the data view manager object 3125 obtains handles for the source and target query trees. It has a stored handle for its own query tree, the target query tree. The data view manager object 3125 also obtains a handle to the source query tree via the source data set object that is passed to it. In step 5520, a check is made to ensure that both queries are in the form of a flwor expression. Any queries generated by the preferred arrangement will be in this form already, however data source queries need to be wrapped in a document function and have an iteration operation applied.
For example, the data source query: http://www.example.com/Projects?/ProjectsDB/Year[.=2002]/Project can be represented by the flwor expression shown in XQuery Example 5. The preferred arrangement attempts to separate the root of the data source in a top-level letAssignment node as shown below (bound to the $projects variable in the case of XQuery Example 5). The remainder of the path is used as the source data path of the forAssignment node.
XQuery Example 5

let $projects := document("http://www.example.com/Projects?/ProjectsDB")
for $p in $projects/Year[.=2002]/Project
return $p

In step 5525 the loop variables associated with each of the data set iterators are determined. Loop variables are those variables that are declared to bind the data of an iteration operation. They thus provide a means for other XQuery operations to reference the results of an iteration process. Loop variables are either defined by a forAssignment node (e.g. $p in XQuery Example 5) or a letAssignment node (e.g. $inv in XQuery Example 3). Variables defined via a forAssignment explicitly hold the results of an iteration process, whereas those defined by a letAssignment can be viewed as implicitly holding the results of an iteration process. For example, it may be convenient to iterate through a set of keys using a forAssignment node and then to use one or more letAssignment nodes to obtain data for the individual key values. This process is shown in XQuery Example 3. The key, defined in a forAssignment node, can be treated as a primary loop variable, and the letAssignment variable ($inv in XQuery Example 3) a secondary or dependent loop variable. In cases where there is a one-to-n relationship between data, nested forAssignment nodes are employed.
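The wrapping step described for XQuery Example 5 can be sketched as follows. The splitting heuristic (treating the path step immediately after the "?/" separator as the data source root) and the variable names are assumptions made for this illustration only, though they hold for the examples in the text.

```python
import re

def wrap_as_flwor(data_source_query):
    """Wrap a plain data source query in a flwor expression, separating
    the data source root into a top-level let binding and using the
    remaining path in the for clause, as described above."""
    # Assumed convention: root ends at the first path step after "?/".
    m = re.match(r"(.*\?/[^/]+)/(.*)", data_source_query)
    root, rest = m.group(1), m.group(2)
    return (f'let $projects := document("{root}")\n'
            f"for $p in $projects/{rest}\n"
            f"return $p")
```

Applied to the data source query above, this reproduces the shape of XQuery Example 5, with the remainder of the path serving as the forAssignment's source data path.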
The determination of the loop variables in step 5525 is required in order to connect the result data of the data views (from which the iterators are derived) with their associated iteration operations in the query, which are expressed in terms of the data sources. This is necessary if the resulting query is to be expressed in terms of the data sources and not the existing data views. This association results in a target query that is independent of its data view sources and depends only on original data sources.
The process of determining the loop variable, which corresponds to the iterator for a data set, being step 5525, is now described further with respect to Fig. 57. In general, there are three cases to consider: (i) the data set's iterator is explicitly defined in the return sub-tree; (ii) the data set's iterator is defined via a variable in the return sub-tree; and (iii) the initial part of the data set's iterator is explicitly defined in the return sub-tree.
The first and third cases can occur when all or part of the iterator's path is explicitly defined as tags in the return sub-tree of the XQuery (e.g. when elements are explicitly constructed in the return sub-tree of the XQuery expression using XQuery's element constructor expressions). The first case is true for the target query depicted in Fig. 59, where the provided iterator, Project, is explicitly located as a descendent node 5915 of the elementConstructor node 5910. The second case can occur when the iterator is implied by a variable node. For example, in the case of the source query, the return sub-tree 5805 has a variable node 5806 with the value of $p 5810.
As seen from Fig. 57, if all the elements of the iterator are not explicitly defined in the return sub-tree in decision 5705, then control passes to step 5720 where any initial elements of the iterator which are explicitly defined are removed from the iterator's path.
This will be the case where element constructors have been used to wrap the results of a query. Then, in step 5730, a list of all possible loop variables (and their associated iteration operations) is compiled for the XQuery expression. This involves locating all iteration operations (as defined by letAssignment and forAssignment nodes) for the query and creating a list item for each one. So for the source data set in this example, there is just a single loop variable, $p.
In step 5735, the first item in this list is selected for processing. Preferably, if a single loop variable is identified in step 5730 then control passes to step 5725, this being depicted by the dashed lines in Fig. 57. Otherwise, in step 5740 the source data path for the iteration operation is generated by parsing the sub-tree of the iteration operation. As mentioned before, the source data path replaces any contained variable names with their values. In the case of the source data set in the source query (see Fig. 58), the source data path for the loop variable $p is: document("http://www.example.com/Projects?/ProjectsDB")/Year[.=2002]/Project.
The source data path for the loop variable $p in the target query (see Fig. 59) is: distinct-values(document("http://www.example.com/Patents?/Patents")/Invention[Year=2002]/ProjectCode).
If, in decision step 5745, the terminal part of the source data path contains the specified iterator, then the loop variable associated with the item is set as the loop variable for the iterator in step 5725 and the process ends in step 5790. In the preferred arrangement, the source data path is first converted to a skeletal source data path before the substring search is performed. A skeletal source data path is a source data path with all predicate expressions and functions, with the exception of the document function, removed. For example, the above-mentioned source data path for the source data set loop variable corresponds to the following skeletal source data path: document("http://www.example.com/Projects?/ProjectsDB")/Year/Project.
Use of the skeletal source data path makes the sub-string search quicker and more robust, however step 5745 can also be performed using the source data path as shown in Fig. 57.
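A minimal sketch of the skeletal conversion might look as follows. It assumes un-nested predicates and at most one outer function wrapper; a real implementation would parse the path properly rather than rely on regular expressions.

```python
import re

def skeletal(source_data_path):
    """Strip predicate expressions, and any function wrapper other than
    document(), from a source data path, yielding the 'skeletal source
    data path' described above."""
    # Drop [...] predicates (assumes predicates contain no nested brackets).
    path = re.sub(r"\[[^\]]*\]", "", source_data_path)
    # Unwrap a single outer function such as distinct-values(...),
    # keeping document(...) intact.
    m = re.match(r"([\w\-]+)\((.*)\)$", path)
    if m and m.group(1) != "document":
        path = m.group(2)
    return path
```

For the two source data paths given above, the function reproduces the skeletal forms quoted in the text, which makes the terminal substring comparison of step 5745 simpler and more robust.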
If the specified iterator cannot be identified in the source data path for the current loop variable, then control passes to step 5747. A check is performed to see whether the initial part of the iterator exists in the source data path. If so, the data browsing application 120 tries to identify descendent elements for that last element detected for the iterator in order to complete the iterator path. Preferably this is achieved by examining the schema definition for the last element of the iterator. This definition should specify any descendent elements for the element of interest. If this is not possible because either schema definitions are not available or do not specify the child content explicitly, then the preferred arrangement examines the data associated with the query to identify descendent elements of the last listed element of the iterator. If the iterator path can be completed in this way then control passes to step 5725 and the current loop variable is assigned to the iterator.
If the iterator path could not be completed then control passes to decision step 5750.
If there are more items in the list, then the next item is selected in step 5755 and control returns to step 5740. If no more items are identified in step 5750, then an unallowable manipulation must have been attempted. This is reported in step 5760 and the process ends in step 5790, thereby enabling a return to step 5528 of Fig. 55.

If, in step 5705, all the elements of the iterator are explicitly defined in the return sub-tree then control passes to step 5710. This is the case for the target data set. In step 5710 the query tree is traced back to any forAssignment nodes that correspond to the return node in which the iterator's path was identified (i.e. the same flwor expression).
A flwor expression can have multiple iteration operations, with the results of each operation being bound to a loop variable. In addition, letAssignment nodes dependent on a forAssignment node can define further secondary loop variables. Because return clauses can contain nested flwor expressions, being able to identify the iteration operations that correspond to a particular return clause (and hence flwor expression) of the XQuery expression reduces the search space.
In step 5715, the forAssignment nodes identified in step 5710 are examined. If a single iteration operation (with its binding loop variable) is declared in step 5715, then control passes to step 5725 where this loop variable is assigned to the iterator and the process ends in step 5790. This is the case for the target data set iterator. If more than one loop variable is declared (i.e. the return expression is associated with more than one iteration operation), then control passes to step 5717. In this step the sub-tree associated with the element constructor corresponding to the terminal element of the iterator is examined. If this constructor explicitly contains the path associated with the data set, the correct loop variable can be determined from examining the content of the corresponding element constructor(s). For example, consider the case of finding the loop variable for the data set identified by the iterator and path, Project and ProjectCode, respectively, in XQuery Example 3. The iterator Project is explicitly defined in the return sub-tree. The data set's path corresponds to the ProjectCode element constructor, which is contained in the Project element constructor. The loop variable for this data set can be determined by examining the defined content of the ProjectCode element. In this case it uses the variable $p and hence $p can be assigned as the loop variable for the data set.
Alternatively, the data set may be able to be identified explicitly by its path relative to a variable (e.g. $p/Code) within the iterator's element constructor. The final possibility is that a variable in the iterator's constructor contains the path implicitly (i.e. the loop variable represents a data set value). If the latter case results and more than one loop variable is possible, then the preferred arrangement resolves the possible loop variables into source data paths and attempts to locate the correct variable by locating the data set's path using the method described for step 5747.
Returning now to Fig. 55, if a loop variable could be identified for each of the source and target iterators in step 5525, control passes from decision step 5528 to step 5530 where the source data paths are constructed for the loop variables. Preferably, source data paths that are constructed during the process of step 5525 are retained for use in this step. If step 5525 resulted in an error then control passes to step 5550 where an unallowable manipulation is reported.
After step 5530, step 5535 operates to update the target query tree, if such is possible. Step 5540 checks if the update was possible and, if not, then the process reports an unallowable manipulation in step 5550 and the process ends in step 5590. If the update is possible, the manipulation is considered allowed, and data from the source data view is copied to the target data view in step 5560. This step results in an update of the XSDOM structure 3130 associated with the target data view. The displayed data view is updated to reflect the result of the copy and the process terminates at step 5590.
The process of updating the target query tree (step 5535 of Fig. 55) will now be described in more detail with respect to Fig. 56. In step 5602, the skeletal source data paths for each of the source and target loop variables are constructed. Preferably, if these have been constructed during the processing of preceding steps they are re-used. In step 5605, the skeletal source data paths are compared. If they are identical, then control passes to step 5615 where the source data paths are compared. If the source data paths are identical, this means that predicate conditions do not vary for the two iterators and therefore the target data set's iterator can be used as is; control then passes to step 5630 where the source data set is included in the return sub-tree of the target query tree.
Step 5630 can mean copying the element constructor for the source data set from the source query tree to the target query tree and updating the referenced loop variable to be that of the target data set. If, however, the source data set was referenced using an expression involving a loop variable in the source query tree, then this expression is copied to the return sub-tree in the target query and the expression's loop variable is changed to be the same as that of the target data set. In both cases, the source data set is added to the return sub-tree immediately after the target data set. The process then ends in step 5690.
If the skeletal source data paths are identical but the source data paths are not, then there must exist different predicate expressions in the source data paths of the source and target iterators. The predicate expressions define filtering conditions on the data collected for the query and thus depend on the default join method being used. If, in decision step 5618, an outer-join method is detected then control passes to step 5619. Otherwise control passes to step 5620 where the source data paths are merged into a single source data path. This operation is only possible for the distinct-union and inner-join methods.
The result of step 5620 is at least one new iteration operation for the target query. If possible, a single iteration operation, with a common source data path, results.
If the distinct-union join method is used, then the predicate conditions of the two source data paths are merged to generate the union of results of the individual predicate conditions. Predicate conditions are merged for unions using the following rules:
1. If a given element in the source data path has two different predicates then these predicates are joined with the "OR" function;
2. If a predicate exists for an element in one source data path but not for the same element in the other source data path, then the predicate is dropped.
If an outer-join method is detected in decision step 5618, then an inner forAssignment node is created in step 5619 for the source data set. The outer iteration of the target view is left unchanged. Control then passes to step 5630 where the source data set is added to the return sub-tree using the loop variable used by the inner forAssignment node in step 5619.
Finally, if an inner-join method is being used, then the predicate conditions of the two source data paths are merged to generate the intersection of the individual predicate conditions. Predicate conditions are merged for intersections using the following rules:
1. If a given element in the source data path has two different predicates then these predicates are joined with the "AND" function;
2. If a predicate exists for an element in one source data path but not for the same element in the other source data path, then the predicate is maintained.
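The union and intersection merging rules above can be sketched together. The predicates of a source data path are modelled here as a simple mapping from element name to predicate text, a deliberate simplification of the query tree; lowercase or/and are used, matching XQuery's operators.

```python
def merge_predicates(a, b, method):
    """Merge per-element predicate conditions of two source data paths
    following the distinct-union and inner-join rules described above.
    a, b: dicts mapping element name to predicate text."""
    merged = {}
    for elem in set(a) | set(b):
        pa, pb = a.get(elem), b.get(elem)
        if pa and pb:
            if pa == pb:
                merged[elem] = pa  # identical predicates are kept as-is
            else:
                # Rule 1: differing predicates are combined.
                op = "or" if method == "distinct-union" else "and"
                merged[elem] = f"({pa}) {op} ({pb})"
        elif method == "inner-join":
            # Rule 2 (intersection): a one-sided predicate is maintained.
            merged[elem] = pa or pb
        # Rule 2 (union): a one-sided predicate is dropped.
    return merged
```

For two Year predicates, distinct-union yields their "or" combination and inner-join their "and" combination; a predicate present on only one side survives only under inner-join.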
The result of the merging process of step 5620 is a new source data path if the distinct-union or inner-join methods are used. This new source data path is used for a common iteration operation in the target data view. In step 5625 the forAssignment node is updated with the new source data path. This involves updating, removing or adding predicatedExpr nodes. For example, if in Fig. 58 the predicate merging process required that the predicate on the Year element was to be removed, then the node 5860 would be made a direct child of node 5865 and the remaining nodes of the predicatedExpr sub-tree 5870 would be deleted from the query tree.
If the skeletal source data paths are not identical in step 5605, then it is necessary to identify a join condition in step 5610 that allows the manipulation to proceed. Preferably, the data browsing application 120 stores a list of skeletal source data path pairs that represent joins within and between different data sources. So in the example described in Section 6, the following join conditions are registered with the data browsing application 120:
1. document("http://www.example.com/Projects?/ProjectsDB")/Year/Project/Code=document("http://www.example.com/Projects?/ProjectsDB")/Year/ProjectResources/ProjectCode
2. document("http://www.example.com/Projects?/ProjectsDB")/Year/Project/Code=document("http://www.example.com/Patents?/Patents")/Invention/ProjectCode
Each join condition represents two join attributes, each specified as a skeletal source data path. In the preferred arrangement, only join conditions employing an equal operation are considered. These join conditions may have been recorded as a result of a user indicating the join in the workspace by joining two data components by a join symbol 1222 as depicted in Fig. 12B. In alternative arrangements, join conditions could also be learned and recorded by examining the queries of received data views.
In the preferred arrangement, a suitable join condition is identified as one having one join attribute that acts as a sibling of or is the same as the source data set, and the other join attribute that acts as a sibling of or is the same as the target data set. In other words, each join attribute and its related data set values must share a common parent. The preferred arrangement will favour a join condition that maintains a one-to-one relationship between the source and target data set values, if more than one possible join condition is identified. However, one-to-n, n-to-one and n-to-n relationships are also permitted. A one-to-n correspondence between target and source data set values will result if the join attribute of the source data set has a one-to-n relationship with the source data set values and the join attribute of the target data set has a one-to-one relationship with the target data set values. N-ary relationships occur when, for each join attribute instance, there is possibly more than one data set value. Preferably the cardinality of relationships is determined by schema definitions, if they exist, or by inspection of the data.
In the described example, the second of the two above-mentioned join conditions represents a valid join condition for the manipulation. The first join attribute of that join condition is a sibling of the values of the source data set, which is identified by the following skeletal source data path, i.e.: document("http://www.example.com/Projects?/ProjectsDB")/Year/Project/PatentEstimate.
The second join attribute is exactly matched to the skeletal source data path of the target data set.
If a join condition is not identified for the skeletal source data path pair in step 5610, then the manipulation is flagged as being unallowable in step 5660 and the process ends in step 5690. If a join condition for the pair is identified, then in step 5640 a source join path is created for each of the data sets. A source join path is the source data path of the join attribute with the predicate expressions of the data set's source data path added. So, for the source and target data sets, the source join paths for the described example are: document("http://www.example.com/Projects?/ProjectsDB")/Year[.=2002]/Project/Code, and document("http://www.example.com/Patents?/Patents")/Invention[Year=2002]/ProjectCode, respectively.
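As an illustrative sketch (an assumed model, not the patent's actual implementation), the construction of a source join path can be pictured as carrying the data set's predicates over onto the join attribute's skeletal path:

```python
def source_join_path(join_attr_steps, data_set_steps):
    """Build a source join path: the join attribute's skeletal path with the
    predicate expressions of the data set's source data path carried over to
    the elements the two paths share."""
    # Collect the predicates present on the data set's source data path
    predicates = {elem: pred for elem, pred in data_set_steps if pred}
    steps = []
    for elem in join_attr_steps:
        pred = predicates.get(elem)
        steps.append(f"{elem}[{pred}]" if pred else elem)
    return "/".join(steps)
```

Applied to the example above, the join attribute path Year/Project/Code combined with the data set path Year[.=2002]/Project/PatentEstimate yields Year[.=2002]/Project/Code.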
In step 5645, the target query's iteration operations are updated. This step depends on what join method is being used (i.e. distinct-union, outer or inner join). These join methods are described in Section 6.
For a distinct-union join, the process of step 5645 is now described with reference to the method 6000 shown in Fig. 60. In step 6010, an outer forAssignment node is created in the target query to iterate through the values generated by the distinct union of the source join paths. Preferably, any redundant distinct-values or distinct-nodes functions are removed from the source join path arguments of the distinct-values function. In decision step 6020, if the source data set is found to have a one-to-n relationship with values of the target data set, then control passes to step 6050. This information can be ascertained from either schema definitions, if they are available, or from inspection of the data. If a one-to-one relationship exists between values of the source and target data sets, then control passes to step 6030.
In step 6030, a letAssignment node is created for the source data set. This assignment is qualified by a predicate specifying the join condition (see XQuery Example 6). The data set is also added to the return sub-tree of the created flwor expression. Preferably, the data set is added as an element constructor; however, alternative arrangements may specify the data set using an expression involving the loop variable defined by the letAssignment node created in this step. Preferably, the process of creating a letAssignment node for the source data set also involves copying the high-level letAssignment node from the source query to define the new data source (in this case the ProjectsDB data source). This is not essential; however, it makes the generated XQueries easier to understand if each of the data sources involved is clearly identified by a variable.
If, in decision step 6050, the display type does not support a one-to-n relationship (e.g. a graph), then preferably the relationship must be compacted using either the count() or sum() functions, as described previously in the example in Section 6 for the PersonMonths data set. Preferably, the sum() function is used when the copied data is numerical.
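The choice between the two compaction functions can be sketched as follows. This is a hypothetical helper for illustration only; the described arrangement builds the corresponding sum() or count() call into the generated XQuery rather than compacting fetched values:

```python
def compact(values):
    """Compact a one-to-n list of joined values for a display type that
    cannot show one-to-n data: sum() when all values are numerical,
    count() otherwise."""
    try:
        return sum(float(v) for v in values)
    except (TypeError, ValueError):
        # Non-numerical data: fall back to counting the values
        return len(values)
```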
Control then passes to step 6030 and processing continues as for a one-to-one relationship, with the exception that the data set is specified in the return sub-tree using the selected compaction function. An example of this is seen with the count() function used by XQuery Example 3.
If the display type does support one-to-n data (e.g. a table), then an inner forAssignment node is created within the outer return sub-tree in step 6055. This forAssignment is qualified by the join condition in the same way as used by the letAssignment node in step 6030. Preferably, a test letAssignment is used to test whether values exist for each inner iteration and, if no values exist, to generate an empty element constructor for the nested iteration. Control then passes to step 6040.
In step 6040 the process ensures that iteration/assignment operations exist for the other sources of data for the target query (e.g. the target data set); in other words, letAssignment and forAssignment nodes may need to be created for one-to-one and one-to-many relationships, respectively. These nodes may already exist if previous joins have been effected. Finally, the process terminates at step 6090.
If an inner or outer join method is being used in step 5645 of Fig. 56, then the process to update the target query's iteration operation proceeds according to the method 6100 depicted in Fig. 61. In step 6110, if the source data set has a one-to-one relationship with the values of the target data set, then control passes to step 6120. In step 6120 a letAssignment node is created for the source data set, the assignment being qualified by the join condition. The data set is also added to the return sub-tree.
If the source data set has a one-to-n relationship with the values of the target data set, then control passes to step 6115. If the display type requires a one-to-n relationship to be compacted, then control passes to step 6120, where the resulting letAssignment will result in a list of values for each inner iteration. This list is then operated on by a compaction function in the return sub-tree, as described for Fig. 60. If the display type supports a one-to-n relationship, then an inner forAssignment node is created for the source data set.
For an inner join, the forAssignment node created in step 6125 is added above the outer return sub-tree; however, for an outer join this node must be created inside the return sub-tree, as described for the distinct-union join method. Preferably, the result of the inner iteration in a return sub-tree is first tested for resulting data and, if no data exist, an empty element is constructed in the returned data. In both the inner and outer join cases the source data set is added to the return sub-tree. In other words, outer joins result in nested return sub-trees whereas inner joins only require a single return sub-tree.
Control then passes to step 6140.
In decision step 6140, if an inner join is required then a conditional node is added in step 6145 to ensure that data is only returned if all iterators have associated values.
Alternatively, if a one-to-one relationship exists between the target and source data sets, then this conditional node can be omitted if the letAssignment node created in step 6120 is changed to a forAssignment node (i.e. treated no differently to a one-to-n relationship between the target and source data sets). Finally, the process ends in step 6190.
The result of step 5645 for the described example is shown in XQuery Example 6 below. The loop variable $p will contain the results of an iteration through each of the source join paths, with any duplicates removed. That is: document("http://www.example.com/Projects?/ProjectsDB")/Year[.=2002]/Project/Code and document("http://www.example.com/Patents?/Patents")/Invention[Year=2002]/ProjectCode. The process depicted by Fig. 56 then ends in step 5690 and control passes back to step 5540 of Fig. 55. The updated query for the target data view is as shown in XQuery Example 6.
XQuery Example 6

let $projects := document("http://www.example.com/Projects?/ProjectsDB")
let $patents := document("http://www.example.com/Patents?/Patents")
for $p in distinct-values(
    $projects/Year[.=2002]/Project/Code/text(),
    $patents/Invention[Year=2002]/ProjectCode/text() )
let $proj := $projects/Year[.=2002]/Project[Code=$p]
return
<Project>
    <ProjectCode> {$p} </ProjectCode>
    <PatentEstimate> {$proj/PatentEstimate/text()} </PatentEstimate>
</Project>

Alternative arrangements could also build into the generated query some resilience to anomalous data. For example, if in XQuery Example 6 the $proj variable contains more than one Project node (i.e. there existed more than one project with the same code), then the above query would be unpredictable. The actual resulting behaviour may depend on how a particular XQuery processor was implemented. It is possible to add checking when creating element constructors for the source data set in the return sub-tree.
For example, the PatentEstimate element constructor could be inserted such that if multiple Project nodes did result with the same code, then a PatentEstimate element would be constructed for each result as shown below.
for $a in $proj
return <PatentEstimate> {$a/PatentEstimate/text()} </PatentEstimate>

7.2 Applying a Filter to a Data View

In the preferred arrangement of the data browsing application 120, the user can specify one or more filters for a data view. Each filter specification can include one or more filter constraints combined with one or more of the Boolean conjunctions AND, OR, or NOT. A filter constraint defines a data component (identified by an XPath expression), a filter operation and a target or value (e.g. Salary < 100,000, Salary > AvgSalary). Preferably, filters are treated as a property of the data view because they can involve multiple data components contributing to the data view.
Also, in the preferred arrangement, filters can only involve data components that are specified by the query (i.e. are part of the data view). This means predicate expressions in the source data paths of iteration and assignment operations are not treated as filters.
Alternative arrangements may permit filter constraints involving data not explicitly fetched by the query. Filters can involve data components that are hidden (i.e. returned by the query but not displayed as part of the data view). Hiding data components is described further in Section 7.6.
Preferably, filter specifications can be enabled and disabled by the user. This means that the user can create a set of alternative filter specifications and combine these in different forms for the current data view. This also means that the filter specifications, and their current state, must be stored as part of the data view's definition (i.e. they are not simply integrated into the XQuery for the data view). In the preferred arrangement, the filters are stored as a list in the data view definition (see the Appendix). Alternative arrangements may not provide for sets of filter specifications, in which case the active filter for a data view can be simply integrated into the XQuery in the data view's definition.
Where there are multiple filter specifications for a data view, in preferred arrangements they are combined conjunctively (i.e. in an "AND" fashion). Thus the active filter (i.e. all the combined enabled filter specifications) for a data view, f, can be represented by an expression tree of the form:

f ::= fc (('AND' | 'OR' | 'NOT') fc)*

where fc represents a filter constraint which is defined by:

fc ::= XPath op (String | Number | XPath)
op ::= 'equals' | 'less-than' | 'greater-than' | 'not' | 'contains' | 'starts-with' | 'ends-with'

The XPath argument is the path of the data component relative to the root node of the data view. The value of the constraint is represented either as a String (i.e. XQuery data type of CHARSTRING), a Number (i.e. XQuery data type of NUMBER) or another data component (i.e. an XPath expression). In other arrangements, other filter operations and conjunctions may be used. For example, it may not be necessary to limit the combination of individual filter specifications to the conjunction "AND".
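A filter expression tree of this form can be modelled and evaluated with a small Python sketch. This is an illustrative interpretation only (it models a subset of the operations and assumes a flat record of component values; the arrangement itself compiles filters into XQuery where clauses rather than evaluating them directly):

```python
OPS = {
    "equals":       lambda a, b: a == b,
    "less-than":    lambda a, b: float(a) < float(b),
    "greater-than": lambda a, b: float(a) > float(b),
    "contains":     lambda a, b: str(b) in str(a),
    "starts-with":  lambda a, b: str(a).startswith(str(b)),
    "ends-with":    lambda a, b: str(a).endswith(str(b)),
}

def evaluate(node, record):
    """Evaluate a filter expression tree against a record mapping
    component names to values.  Inner nodes are ('AND'|'OR', [children])
    or ('NOT', child); leaves are (xpath, op, value) constraints."""
    tag = node[0]
    if tag == "AND":
        return all(evaluate(child, record) for child in node[1])
    if tag == "OR":
        return any(evaluate(child, record) for child in node[1])
    if tag == "NOT":
        return not evaluate(node[1], record)
    xpath, op, value = node
    # A string value naming another data component is resolved first
    # (the XPath-valued constraint target); otherwise it is a literal.
    target = record.get(value, value) if isinstance(value, str) else value
    return OPS[op](record[xpath], target)
```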
SFiltering operations typically map to the where clause(s) of XQuery flwor expression(s). Since XQuery expressions can contain more than one flwor expression nested expressions or a sequence of expressions), an active filter may thus involve the modification of more than one where sub-tree in the query tree. Also, in the preferred arrangement, the user can specify a system preference for filters to be copied with data. So for example, if a data set is copied to another data view, the active filter of the source data view is added to the target data view. This results in a new active filter for the target data view.
The process of setting a filter for a data view is now described with respect to the method 6200 depicted in Fig. 62, which is operable as a part of the data browsing application 120. This process is initiated by the user indicating in a GUI, such as shown in Fig. 12A, that a further filter specification is to be applied or an existing filter specification is modified or removed. The modification of an existing filter specification can include a change of state (e.g. from enabled to disabled). The process is also initiated whenever a user copies a data component to a new data view with the copy filter preference set. Each of these user-mediated actions results in the list of filter specifications for the current data view being modified. The modified list is passed as a Filter object to the data view manager object 3125 (Fig. 31A) associated with the data view, for which the filter is being altered, using the following method:

void setFilter(Filter f)

The argument f contains a list of filter specifications, with each specification represented as an expression tree of the form described by the EBNF defined earlier in this section and having an associated flag, which defines its state (enabled/disabled). In step 6205 of Fig. 62, the data view manager object 3125 extracts those specifications which are enabled from the list of filter specifications in f and generates a single expression tree for the active filter.
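The extraction of enabled specifications in step 6205 can be sketched as follows (an assumed model of the Filter object's contents, for illustration only):

```python
def active_filter(specs):
    """Combine the enabled filter specifications of a data view into a
    single expression tree, AND-ing them together as described above.
    Each spec is a pair of (expression_tree, enabled_flag)."""
    enabled = [tree for tree, on in specs if on]
    if not enabled:
        return None                 # no active filter
    if len(enabled) == 1:
        return enabled[0]
    return ("AND", enabled)
```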
In step 6210 the current query for the data view is examined. If the current query is not a flwor expression (as may be the case if the user is browsing through a data source), then the XQuery is converted into a flwor expression. Although the required filter could be applied by way of adding predicates to the XPath expression, in the preferred arrangement the XPath expression is converted to a flwor expression, with a forAssignment node being created for the data path specified by the existing XPath expression. Once the query is in the form of a flwor expression, processing can continue at step 6215. In this step all the current where sub-tree(s) are pruned from the query tree.
These sub-trees may have been involved in a previous filter operation. This step is performed in the preferred arrangement to ensure that the result of previous filtering operations is removed.
In step 6220, a list of XPath expressions involved in the active filter is constructed.
Each filter constraint will define at least one XPath expression identifying the data component on which a filter condition applies. Some filter constraints may also involve a second (target) data component, the value of which is to be compared to a first data component of the filter constraint. As mentioned before, these XPath expressions are relative to the root node of the data view. Then in step 6225, a corresponding binding operation (i.e. as defined using either a letAssignment or forAssignment node) is identified for each of the XPath expressions in the list constructed in step 6220. The identification of the binding operation (and its corresponding binding variable) is achieved substantially as described in Section 7.1 for the copy methods.
If the XQuery contains a single flwor expression, then all the XPaths will correspond to the binding operations explicit in the forAssignment and letAssignment nodes of that expression. Consequently each of the filter constraints should be able to be expressed in the where clause of the flwor expression using the existing binding variables. So in decision step 6230, if the XQuery contains a single flwor expression, then control passes to step 6235 where a where sub-tree is constructed from the expression tree created in step 6205. This process involves locating all the XPath expressions in the expression tree and replacing them with expressions relative to the binding variable(s). For example, when the Project data view is filtered, as described in Section 6, the single filter constraint involves the XPath expression Project/Manager.
This expression must be changed to be bound to the variable $p. On completion of step 6235, control passes to step 6260.
If the XQuery contains more than one flwor expression, then control passes to decision step 6240. Multiple flwor expressions can be combined in a sequence (i.e. a list of expressions) or nested. In the nested case, because the individual filter constraints can be combined either conjunctively or disjunctively, it is not sufficient to treat the constraints as separable (i.e. just applied to their own flwor expressions). Sequences of expressions can be treated as separable because the individual flwor expressions are essentially independent of each other. If in step 6240 a sequence of flwor expressions is detected, then control passes to step 6245. In this step each individual flwor expression is examined and, if one or more data components involved in the active filter arise from that flwor expression, then a where sub-tree is created for the part of the active filter that applies to the expression. Control then passes to step 6260.
In step 6250, the data view manager object 3125 inspects each of the inner flwor expressions. The data view manager object 3125 first ascertains whether any filter constraints of the active filter involve the inner flwor expression. If not, then control passes to step 6255. If it does, then a where sub-tree must be constructed for the entire filter f and added to the inner flwor node. In this sub-tree the XPath expressions for data components must be replaced by expressions involving the binding variables (i.e. the loop variables of the inner and perhaps outer flwor expressions). Control then passes to step 6255. If there are multiple inner flwor nodes, then step 6250 is performed for each inner flwor node.
In step 6255 the filter must now be applied to the outer flwor expression. Preferably, if none of the filter constraints involve this iteration operation, then it is not necessary to apply the filter at this level and control can pass to step 6260. If filter constraints do involve data components obtained via the outer iteration operation, then a where sub-tree must also be added to this flwor node. However, this where sub-tree must represent the entire filter and therefore may need to refer to data components that are obtained by the inner iteration operation. For this reason, it is necessary to add a test iteration within the where sub-tree of the outer iteration operation. This test iteration basically performs the inner iteration for the purposes of the filter. The test iteration can be created by copying the inner iteration, changing the loop variable of the iteration to use a variable not previously used by the query, and then applying the XPath 2.0 exists() function. A where sub-tree can then be constructed using the test iteration and added to the outer flwor node.
In general, it is not possible to de-nest the iteration operations (i.e. move the inner forAssignment sub-tree to be outside of the outer flwor node's return sub-tree) because this will affect the grouping of the resulting data. Also, if an XQuery contains multiple levels of nesting, then steps 6250 and 6255 must be performed for each parent-child pair.
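The effect of the test iteration wrapped in exists() can be illustrated with a Python sketch. The data shapes and the helper name are hypothetical; the any() call here plays the role that the copied inner iteration plays in the generated where clause:

```python
def filter_projects(projects, resources, constraint):
    """Keep a project only if some row of the inner iteration (a resource
    with a matching project code) satisfies the whole filter constraint,
    mimicking the exists() test added to the outer where sub-tree."""
    return [
        p for p in projects
        if any(constraint(p, r)
               for r in resources
               if r["ProjectCode"] == p["Code"])
    ]
```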
Finally, in step 6260 the filter specification list contained in the Filter object is then stored for the data view and becomes part of the data view's definition. The process ends in step 6290.
The process of Fig. 62 will now be discussed with reference to an example.
Consider the following query, which uses the ProjectsDB data source described in Section 6.
XQuery Example 7

let $projects := document("http://www.example.com/Projects?/ProjectsDB")
for $p in $projects/Year[.=2002]/Project
return
<Project>
    <ProjectCode> {$p/Code/text()} </ProjectCode>
    <ProjectName> {$p/Name/text()} </ProjectName>
    {
    for $r in $projects/Year[.=2002]/ProjectResources[ProjectCode=$p/Code]
    return
        ($r/EmployeeID, $r/PersonMonths)
    }
</Project>

The data obtained using this query could be presented, using the method described in Section 5, as a table with four columns of data (ProjectCode, ProjectName, EmployeeID and PersonMonths) where there is a one-to-n relationship between the first two data sets and the last two data sets. The user may have specified an active filter of the form (Project/ProjectName starts-with ... OR Project/PersonMonths > 6). In this case, in step 6220 the XPath expressions Project/ProjectName and Project/PersonMonths are associated with the binding operations using variables $p and $r respectively. Since there are two flwor expressions, in step 6230 control passes to step 6240 and then to 6250 because the query does not involve a sequence of flwor expressions.
In step 6250, it is necessary to construct a where sub-tree for the inner flwor expression in order to effect the entire filter constraint (i.e. the active filter involves the data component Project/PersonMonths). The XPath expressions in the filter are replaced with the relevant binding variables. In this case, the XPath expressions Project/ProjectName and Project/PersonMonths correspond to the expressions $p/ProjectName and $r/PersonMonths. The constructed where sub-tree is added to the inner flwor node and control passes to step 6255.
Since the filter also involves a data component that is obtained via the outer iteration operation, a where sub-tree must also be added to this flwor node. However, this sub-tree must reference the data component Project/PersonMonths that is obtained via the inner iteration operation. Consequently a test iteration must be constructed for the outer iteration's where sub-tree. This is constructed by copying the inner iteration, complete with its constructed where sub-tree, replacing the loop variable with a new variable that has not been used in the query, and then applying the XPath exists() function to the result of the iteration. The constructed where sub-tree is then added to the outer flwor node. The resulting filtered XQuery is shown below in XQuery Example 8.
In this example, the nested iteration must be repeated in order to preserve the grouping of the returned data. For example, in the data returned by XQuery Example 7, it would not be possible to move the inner iteration above the return node because it would affect the grouping of the data. Each Project element can have multiple EmployeeID and PersonMonths child elements. If the inner flwor node were moved outside of the outer flwor node's return sub-tree, each Project element would have at most a single EmployeeID and PersonMonths child element. In other words, although the data actually returned would be the same, the one-to-n grouping of the data would be changed.
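The grouping point can be made concrete with a small Python model (the data and helper are hypothetical, for illustration only): nested iteration keeps all of a project's resource rows together under that project, which is exactly what de-nesting would lose.

```python
def group_resources(projects, resources):
    """Nested iteration preserves the one-to-n grouping: each project keeps
    the full list of its resource rows, rather than the result being
    flattened into one row per (project, resource) pair."""
    return [
        {"Code": p["Code"],
         "EmployeeIDs": [r["EmployeeID"] for r in resources
                         if r["ProjectCode"] == p["Code"]]}
        for p in projects
    ]
```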
XQuery Example 8

let $projects := document("http://www.example.com/Projects?/ProjectsDB")
for $p in $projects/Year[.=2002]/Project
where exists(
    for $x in $projects/Year[.=2002]/ProjectResources[ProjectCode=$p/Code]
    where $p/ProjectName starts-with ... or $x/PersonMonths > 6
    return $x )
return
<Project>
    <ProjectCode> {$p/Code/text()} </ProjectCode>
    <ProjectName> {$p/Name/text()} </ProjectName>
    {
    for $r in $projects/Year[.=2002]/ProjectResources[ProjectCode=$p/Code]
    where $p/ProjectName starts-with ... or $r/PersonMonths > 6
    return
        ($r/EmployeeID, $r/PersonMonths)
    }
</Project>

Although the preferred arrangement can result in some redundancy (e.g. the inner where sub-tree could be modified to include only those filter constraints pertaining to the inner iteration operation), the method does not require a specific process for each of the different conjunctions of the filter and therefore is readily applied in a general sense. The method described with reference to Fig. 62 can be used for queries representing distinct-union, outer or inner joins.
Filters can be removed from a data view by simply calling the setFilter(Filter f) method with an empty Filter object. In this case, any where sub-trees in the query are simply removed, as described for step 6215 in Fig. 62.
7.3 Specifying a Sort Order for a Data View

In the preferred arrangement the sort sequence for a data view can be set in either ascending or descending order of a particular data set. Preferably a single sort sequence is permitted for a data view. This may be achieved using the GUI, such as shown in Fig. 12A, by the user selecting the data set to be sorted, then choosing the Sort option on the contextual menu 1292 and specifying either ascending or descending order.
Alternative arrangements could permit sort sequences involving more than a single data component to be specified without departing from the scope of this disclosure.
When the user specifies a desired sort order, a call is made to the following method of the data view manager object 3125 (Fig. 31A) associated with the relevant data view:

setSortBy(DataSet dataSet, SortDirection direction)

where the dataSet argument is as defined in Section 7.1 and the direction argument is set to either ascending or descending.
The data view manager object 3125 first ensures that the query is in the form of a flwor expression as described in Section 7.2 for a filter operation. The data view manager object 3125 then updates the query tree associated with its data view to insert an orderBy node and associated sub-tree in the flwor expression, which defines the iteration operation required by the specified data set. Existing orderBy nodes in the query are removed.
Alternative arrangements could allow multiple orderBy nodes to exist for the data view.
Fig. 63 shows an example of a query having a specified sort order indicated by node 6305. An orderBy node must contain one or more orderField nodes 6310. Each orderField node specifies the data that is to be ordered and the order (ascending or descending). In order to create an orderBy node and its associated sub-tree, the data view manager object 3125 must be able to identify the flwor expression 5882 that defines the iteration operation used by the data set.
The iterator associated with the selected data set can be used to identify first the corresponding loop variable and hence the relevant flwor expression, as described in Section 7.1. The path of the data set with respect to the loop variable can then be determined. A new orderBy node and its descendant nodes can then be added to the relevant flwor expression.
For example, if in Fig. 12A the user had selected the Manager column of the table and selected to sort the data view in descending order for that column, then the above method would be called on the data view manager object 3125 associated with that data view. The dataSet argument would have an iterator of Project and a path of Manager.
The method described in Section 7.1 can be used to determine that the loop variable for this data set is $p. This implies that the identifier for the order by expression is simply the path, Manager. The one or more child orderField nodes of an orderBy node specify the identifier relative to the loop variable of the identified flwor expression.
So, in the case of the example, the data view manager object 3125 would insert an orderBy node 6305 as shown in Fig. 63. This figure shows the flwor expression 5882 of Fig. 58. The orderBy node 6305 is inserted under the identified flwor node 5882 in Fig. 58. An orderField node 6310 is then added with an identifier child node 6315 which specifies the data set values, relative to the iterator, which are to be sorted.
After updating the query tree, the data view manager object 3125 then updates the data to reflect the new sort sequence. Preferably, this is achieved by sorting the data that has already been fetched for the query. However, it is also possible for the data view manager object 3125 to re-fetch the data for the query and thus use the functionality of data servers to perform the processing associated with the sort operation.
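The client-side alternative (re-sorting the already-fetched rows rather than re-fetching) can be sketched as follows. This helper and its row representation are hypothetical, for illustration of the design choice only:

```python
def sort_fetched(rows, path, descending=False):
    """Re-sort rows that have already been fetched for a query, so a new
    orderBy setting can be honoured without re-running the query at the
    data server."""
    return sorted(rows, key=lambda row: row[path], reverse=descending)
```

Sorting locally avoids another round trip to the data server; re-fetching instead pushes the sorting work onto the server, which may matter for large results.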
7.4 Performing a Transformation Operation.
Transformations are mapped to functions, which are built into the XQuery expression. In the preferred arrangement, a GUI such as shown in Fig. 12A allows users to specify transformations to apply to a selected data set or to combinations of selected data sets. Combination operations are described further in Section 7.5. Further preferably, transformation and combination manipulations are only permitted for data sets; however, it should be clear that the concept could also be applied to data nodes without departing from the scope of the present disclosure.
For example, a user might select the Manager column of the Project data view shown in Fig. 12A and select to apply the function toUpperCase() to the data.
Preferably this action would be achieved using the example-based method described in Sections 3 and 4. Alternatively, the user could select the toUpperCase() function from a provided list of functions.
Preferably a user's indication to perform a transformation results in the data view manager object 3125 (Fig. 31A), associated with the data view being manipulated, being called to perform the desired transformation and update the query and the associated data.
In the preferred arrangement one of the following methods is called on the data view manager object 3125:

1. void transform(DataSet dataSet, Transform transform)
2. void transform(String newName, DataSet dataSet, Transform transform, boolean createMapping, boolean removeSource)

The first method is used when the user wishes just to transform some displayed data in place. In this case the data set is not renamed and a mapping cannot be generated for the transformation. The second method is required if the user desires to generate a copy of the data set to contain the transformed data, assign a new name to the transformed data, and/or create a mapping based on the transformation.
The dataSet and transform arguments specify the data set to be operated upon and the transform type that is to be performed, respectively. Arguments required for the transform (see Table 1) are contained within the transform argument object. If the second method is used, the newName argument should contain the name to be used for the transformed data, if it is to be renamed. The Boolean flag createMapping informs the data view manager object 3125 whether it needs to create a mapping based on the transform. The final argument of the second method, the removeSource flag, should be set to false if the original data is to be preserved. The default for this flag is to remove the source data for the transform.
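The semantics of the newName and removeSource arguments can be sketched with a Python model. The helper, the row representation and the column-based view are all assumptions for illustration; the arrangement itself rewrites the XQuery rather than transforming fetched rows:

```python
def apply_transform(rows, data_set, func, new_name=None, remove_source=True):
    """Apply `func` to the named column of each fetched row.  With new_name
    set, the result is stored under the new name, and the source column is
    dropped only when remove_source is True (the default)."""
    result = []
    for row in rows:
        row = dict(row)                  # leave the caller's rows untouched
        value = func(row[data_set])
        if new_name:
            row[new_name] = value
            if remove_source:
                del row[data_set]
        else:
            # First method: transform the displayed data in place
            row[data_set] = value
        result.append(row)
    return result
```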
When processing a transform() call, the data view manager object 3125 uses an available library of XQuery functions. These library functions use internal XPath functions wherever possible (e.g. upper-case($in) in XQuery Example 9). The data view manager object 3125 identifies the necessary function from the library and then inserts the function's definition into its data view's query tree (see XQuery Example 9 below).
Note that, as with filter and sort operations, the data view manager object 3125 must first ensure that the query is in the form of a flwor expression. The data view manager object 3125 must then apply the function to the correct data set in the return sub-tree of the query tree. This means that the function must identify the dataSet identifier in the return sub-tree. This is done in substantially the same way as described for the copy, filter and sort operations in Sections 7.1 to 7.3. The resulting XQuery for the described example is shown below in XQuery Example 9.
XQuery Example 9

define function toUpperCase(xsd:string $in) return xsd:string
  return upper-case($in)

<Data>
{
  let $projects := "http://www.example.com/Projects?/ProjectsDB"
  for $p in document($projects)/Year[.=2002]/Project
  return
    <Project>
      { $p/Code }
      <Manager>{ toUpperCase($p/Manager/text()) }</Manager>
    </Project>
}
</Data>

If the second method is used with a specified newName argument, then an element constructor with a tag name of newName is added to the return sub-tree of the query.
The content of this new element will be the result of applying the function to the original data (as shown above in XQuery Example 9). The Boolean value of the removeSource flag will specify whether the original data set should be removed from the return sub-tree. If the createMapping flag is true, then a mapping will be stored for the data view as described in Section 11. Another user, receiving this data view, would be able to choose whether he/she wanted to import the mapping for further use. In other words, the mapping can represent a re-usable transformation that can be shared with others.
Nested transformations can be performed by making repeated calls to the above-described methods.
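To make the behaviour of the two transform() variants and their nesting concrete, the following Python sketch models them with plain lists. All names here are illustrative: the patent's actual methods operate on a data view manager object and rewrite an XQuery tree, not Python lists.

```python
def transform_in_place(data_set, fn):
    # First variant: transform the displayed data in place; the data set
    # is not renamed and no mapping is generated.
    return [fn(member) for member in data_set]

def transform_copy(new_name, data_set, fn, create_mapping=False,
                   remove_source=True):
    # Second variant: produce a transformed copy under a new name,
    # optionally keeping the source data and recording a mapping.
    result = {new_name: [fn(member) for member in data_set]}
    if not remove_source:
        result["source"] = list(data_set)
    if create_mapping:
        result["mapping"] = fn.__name__
    return result

managers = ["a. lennon", "t. long"]
upper = transform_in_place(managers, str.upper)
# A nested transformation is simply a repeated call on the previous result.
trimmed = transform_in_place(upper, str.strip)
```

The in-place variant mirrors the case where the query is updated without renaming; the copy variant mirrors the newName/createMapping/removeSource behaviour described above.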
7.5 Performing a Combination Operation
Data manipulations involving combinations of data components can also be processed by the data view manager object 3125 for its associated data view. These combinations may or may not also involve transformations. Typically, combinations of data components result in new element constructors in the query trees. Like transformation operations, the data view manager object 3125 must first ensure that the query is in the form of a flwor expression as described for filter operations in Section 7.2.
In the preferred arrangement, the user can indicate a combination is required by selecting two or more data sets in a data view (e.g. columns in a table) and then choosing the combine option on the contextual menu 1292. Alternatively the user can select to combine two data sets as he/she is dragging new data into the data view as described previously with reference to Figs. 13A to 13C. Preferably, the user can define the desired combination using the example-based approach described in Sections 3, 4 and 6.
Alternative arrangements may require the user to specify the combinations functionally.
The resulting request for a combination operation may involve one or more binary or n-ary operations, as described in Sections 3 and 4.
In the preferred arrangement, combinations are processed by calling one of the following methods on the data view manager object 3125 of Fig. 31A associated with the data view being manipulated.
1. void combine(String newName, Operation op, DataSet ds1, DataSet ds2, boolean createMapping, boolean removeSources)
2. void combine(String newName, Operation op, DataSetList dataSetList, boolean createMapping, boolean removeSources)
A combination operation, involving a series of transforms and binary and/or n-ary operations, is preferably broken up into its integral operation components and individual calls are made to the transform() method(s) and the above two combine() methods.
Operations are performed on a left-to-right basis as described previously in Sections 3 and 4.
Each combination operation can be associated with an optional newName argument. If provided, this argument will be the name of the element created in the XQuery to hold the result of the combination. If it is not provided (e.g. null) then the name of the first data set will be used. An error results if the Boolean flag removeSources is false and a newName is not specified. This is because the resulting XQuery will have two elements with the same name and possibly the same namespace.
The default value for the removeSources flag for combinations is true.
For binary operations, the first method should be used with the op argument specifying the desired operation. The binary operations supported by the data browsing application 120 are listed in Section 3. That list may be supplemented to contain further or different operations from those listed for the preferred arrangement. The data set arguments, ds1 and ds2, refer to the data sets on which the operation is being performed.
For n-ary operations, the second method should be used. As for binary operations, the op argument defines the desired operation that is to be performed on all the data sets in the dataSetList argument. The list of n-ary operations supported by the data browsing application 120 are listed in Section 3. As with binary operations, it should be clear that this list could contain further or different operations.
Mappings can also be created for combinations, just as they can for transformations.
If the createMapping flag is set to true then the data view manager object 3125 will create a mapping for the combination as described in Section 7.4.
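As a rough illustration of the data-level effect of a member-wise combination, the following Python sketch models the combine() behaviour, including the rule that keeping the source data sets without a newName is an error. The real implementation rewrites the XQuery return sub-tree; every name below is hypothetical.

```python
def combine(new_name, op, data_sets, remove_sources=True):
    # The operation is applied member-wise across the ordered data sets.
    if new_name is None and not remove_sources:
        # Mirrors the error described above: keeping the sources without
        # a new name would yield two elements with the same name.
        raise ValueError("newName required when removeSources is false")
    combined = [op(*members) for members in zip(*data_sets)]
    # When no new name is given, the first data set's name would be used.
    return {new_name or "ds1": combined}

first_names = ["Bill", "Will"]
surnames = ["Brown", "Spears"]
full_names = combine("CustomerName",
                     lambda a, b: a + " " + b,
                     [first_names, surnames])
```

An n-ary combination simply passes more than two data sets in the list, matching the second combine() signature.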
7.6 Hiding a Data Component
In the preferred arrangement the user is also able to "hide" a data component of a data view. This means that data for the data component is still collected as part of the query, however the data is ignored for presentation purposes. Preferably the user hides a data component by first selecting the data component in the GUI 1200 and then selecting the Hide option from the contextual menu 1292. This action results in the following method being called on the data view manager object 3125 of Fig. 31A associated with the data view in which the data component exists:
void hide(DataComponent dc)
The dc argument can represent a data node, data set or data series.
The process of hiding a data component is now described with reference to Fig. 64.
In step 6405 the data view manager object 3125 of Fig. 31A examines the query associated with its data view. If the query is not a flwor expression, then, according to step 6410, the query is converted to a flwor expression as described with reference to XQuery Example 5 and step 6220 of Fig. 62. Control then passes to decision step 6415.
If, in step 6415, the data component to be hidden is represented by an element constructor in the return sub-tree of the flwor expression, then control passes to step 6425. If this is not the case, then in step 6420 an element constructor is created to represent the data component. This step may be required if the data component was previously obtained from an attribute constructor or derived from a variable-addressed element or attribute.
In step 6425, the data view manager object 3125 generates an attribute constructor for the hidden attribute, if it does not already exist, and the value of this attribute is set to true. The hidden attribute is defined to exist for a namespace, associated with the data browsing application 120, and therefore should not conflict with other data components used by data sources. The process then ends in step 6490.
The data view presentation processing described in Section 5 effectively ignores data components marked as hidden. The user can select to view hidden data components by selecting the View Hidden Data Components option of a data view's contextual menu 1292. This results in the data view being presented with all data components displayed. The user can then use the View Hidden Data Components option as a toggle to view the data view without hidden data components displayed.
A hidden data component can be made visible (ie. unhidden) by the user selecting a displayed hidden data component in the GUI 1200 and then selecting the Set Visible option from the contextual menu 1292. This action results in the following method being called on the relevant data view manager object 3125:
void unHide(DataComponent dc)
The data component indicated by the argument dc is to have its hidden state removed.
Preferably this method sets the value of the hidden attribute for the data component to be false in the return sub-tree of the query. Alternative arrangements could remove the attribute from the data component's element constructor. The data view presentation process is performed again, resulting this time in a data view including the specified data component.
When hidden data component(s) are presented as part of the data view, the presentation process described in Section 5 may result in a different set of allowable display types. Preferably the display type used to present the data is not changed from the display type used before the Set Visible or View Hidden Data Components actions were initiated by the user.
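The hidden-attribute behaviour can be sketched as follows. This is a toy model: real data components are element constructors in a query tree carrying a namespaced hidden attribute, whereas here they are plain dictionaries with invented keys.

```python
def hide(component):
    # Hide: the hidden attribute (in the application's namespace)
    # is created if necessary and set to true.
    component["hidden"] = "true"

def unhide(component):
    # Set Visible: the attribute value is set to false rather than
    # the attribute being removed.
    component["hidden"] = "false"

def presented(components, view_hidden=False):
    # The presentation process ignores hidden components unless the
    # View Hidden Data Components toggle is active.
    return [c for c in components
            if view_hidden or c.get("hidden") != "true"]

columns = [{"name": "Code"}, {"name": "Manager"}]
hide(columns[0])
visible_names = [c["name"] for c in presented(columns)]
```

Note that hiding does not remove the component from the collection, matching the description that the data is still collected by the query and merely ignored for presentation.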
7.7 Renaming a Data Component
Data components can also be renamed. In the preferred arrangement the user can select a data component in the GUI 1200 of Fig. 12A (e.g. a column of a table, a grid unit, etc.) and alter the name of the data component. This results in the following method of the data view manager object 3125 of Fig. 31A associated with the data view being called.
void rename(String newName, DataComponent dc, boolean createMapping)
Since a data node, data set and data series are all specialisations of a data component (see Section ), objects of these types can be passed as arguments.
As with the previous methods described, the data view manager object 3125 must first locate the data component in the return sub-tree. This method may need to interrogate the data component for its type (ie. data node, data set or data series) in order to locate the correct identifier in the return sub-tree. Once located, an element constructor, with a tagName having the identifier specified by the newName argument, is added to the return sub-tree and the element corresponding to the specified data component is removed.
The content of the data component is unchanged by the rename operation. If the data component was previously represented by an element constructor (rather than a path with respect to a variable), then preferably the name of the element constructor is simply modified. In other words, the data view manager 3125 just needs to change the name of the tagName node in the return sub-tree.
If the newName argument does not conform to the requirements of a tag identifier (e.g. it contains spaces), then preferably the data view manager object 3125 creates an attribute constructor for the dcname attribute, in the located element constructor for the data component. It sets the value of this attribute to that of the newName argument. As with the hidden attribute, the dcname attribute is defined to exist for a namespace which is associated with the data browsing application 120.
As with transformations and combinations, rename operations can also be saved as mappings.
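The tag-name versus dcname decision can be sketched like this. The validity test below is a deliberate simplification of the real XML tag identifier rules (it only rejects obvious cases such as embedded spaces), and the dictionary stands in for the element constructor.

```python
import re

# Simplified stand-in for the XML Name production: a leading letter or
# underscore followed by name characters. The real rules are broader.
_TAG_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_.-]*$")

def rename(component, new_name):
    if _TAG_NAME.match(new_name):
        # A legal tag identifier simply replaces the element name.
        component["tagName"] = new_name
        component.pop("dcname", None)
    else:
        # Not a legal tag identifier (e.g. it contains spaces): keep the
        # element name and record the display name as a dcname attribute.
        component["dcname"] = new_name
    return component
```

The content of the component is untouched in either branch, matching the statement that the rename operation leaves the data component's content unchanged.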
8. Specifying Variable Data for the Assembly of a Document Set
A data browsing application, substantially as described in Sections 1 to 7 of this description, can also be used to specify variable data for the assembly of a document set.
An application which assembles a document set from variable data for printing purposes is referred to as a variable data printing (VDP) application. Variable data printing applications generally use a document template (or master) which contains the static information to be shared by all documents of the document set and slots for variable data.
The variable data typically varies for each document in the document set. The process of creating the document set involves instantiating the variable data for each of the variable data slots of the document template. The resulting set of documents can represent a customised or personalised set of documents and is often used for marketing or customer relations purposes. A method for specifying the variable data to be used for the assembly of a document set will now be described with respect to a VDP application. The described method can also be used by other applications which involve the creation or assembly of a set of viewable documents from variable data (e.g. applications used to view database data in forms designed with static data).
An example GUI 6800 for a VDP application is shown in Fig. 68. The GUI 6800 is substantially as described with respect to Figs. 12 and 13. Datamarks from the datamarks panel 1205 can be selected to view associated XML data as a data view 6810 in a data browsing workspace 6890. The data browsing workspace 6890 is substantially similar to the workspace 1202 depicted in Fig. 12. The data view 6810 is presented using the most appropriate display type using the method described in Section 5. The example GUI 6800
To the right of the data browsing workspace 6890, is a document template panel 6822. The panel 6822 displays a current document template 6825. Above the 00 0 5 document template panel 6822 is an open document template control 6820 where a user can select a document template to use for the current VDP job. Preferably, document Stemplates have already been authored and stored using a standard markup language such as Personalised Print Markup Language (PPML) version 2.1. PPML is an industry standard XML syntax designed specifically for digital print projects, in particular for efficient printing of documents with reusable and variable content. PPML is being developed by PODi (see http://www.podi.org). The root PPML node of a PPML document can contain one or more DOCUMENT SET elements. Each DOCUMENT SET element can in turn contain one or more DOCUMENT elements.
Typically a DOCUMENT element represents the binding of layout information (ie. a template) and some instantiated variable data. A DOCUMENT element can include multiple PAGE elements, and each PAGE element comprises one or more MARK elements. It is the MARK elements which specify the actual placement of marks on a page. In the preferred implementation, a document template (for a VDP job) is represented in PPML by specifying a document set containing a single document. The position of static and variable data slots are indicated by MARK elements in the one or more PAGE elements required for the template. Preferably, MARK elements representing variable data slots are differentiated from those representing static data by the presence of a "variable data" text string in an INTERNAL_DATA element in the content of the MARK element. Clearly, other methods of distinguishing variable data
d SOther storage forms for document templates can also be used Adobe's Portable Document Format Document templates can be opened using a general purpose file open dialog commonly employed by Windows T M software applications.
00 Alternative implementations may provide a document template authoring function which N resembles the authoring environment provided by many existing VDP applications, such Sas PrintShop Mail (Atlas Software BV) and DesignMerge (Banta Integrated Media).
Data sets columns of a table) of a data view, such as 6810, are selectable.
Sections 6 and 7 describe how data sets, such as columns of a table, can be selected and dragged to a workspace, 1202 or 6890, to create a new data view 6810. In a substantially similar way, it is possible to select and drag a data set from a table or graph to be associated with a variable data slot in a document template 6825 in the document template panel 6822. This process will now be described in more detail with respect to Figs. 65 to 71.
Fig. 65 depicts a first method 6500 for specifying data for the assembly of a document set. As with the previous methods, the method 6500 is preferably implemented as software executing on the computer 1100. In an initial step 6505, a document template 6825 containing at least one slot for variable data is displayed in the document template panel 6822. The example document template 6825 of Fig. 68 is shown in detail in Fig. 69. Preferably the document template 6825 contains static data such as letterhead detail 6901, images and text) which will be the same for all the documents of the assembled document set. The template 6825 also contains variable data slots, such as the indicated slots 6905, 6910, 6915, 6920, 6925,6930,6935,6940 and 6945 seen in Fig. 69. The document template 6825 shown in Fig. 69 has been designed for the purpose 693292 O -152o of assembling a set of documents for each of a company's customers, with each Scustomer's document showing the products that the particular customer has already purchased, a graphical representation of that customer's internet usage over the last 12 months and a suggested product for purchase by the customer.
00 00 5 In step 6510, the user can then select a desired data view by either selecting to view (Ni N a datamark or opening a location using the File menu or the open location control 1208 of Sthe GUI 6800. The data view 6810 is displayed in the data browsing workspace 6890 using the most appropriate display type (as described in Section The user may select to follow one or more hyperlinks before locating the correct data view.
Alternatively the user may construct a new data view from two or more existing data views in the data browsing workspace 6890 using the method described in Sections 6 and 7. The resulting data view may represent a join between two or more heterogeneous data sources. The user may also select to create new data sets in the data view by using the combine function described in Section 7.5. This may be useful if the data, which the user wishes to use in the VDP job, needs to be derived from one or more database fields.
For example, the names of customers may be stored in a database as first names and second names. A combination operation could then be performed as described in Section 6 with respect to Fig. 13. Similarly, transformations may be performed to ensure that all the data for a data set was in a consistent form consistent use of upper and lower case). The preferred method of performing transformations is described in Sections 3 and 4. Preferably it is not necessary for the user to save the results of these data preparation steps. They can be performed as part of the preparation phase for a particular VDP job.
So, for example, the user may have selected to view or prepare a table data 693292 -153- Sview 6810 such as shown in Fig. 70. The data view 6810 displays a set of data sets a) Shaving the following names: Customer Name; Address 1; Address 2; Address 3; Internet Usage and Products. It should be noted that the data in the data view 6810 helps to explain to the user what is actually stored for each data set. A user simply 00 0 5 presented with the list of data set names may not understand how the address is (Ni Sdistributed between the data sets having names, Address 1, Address 2, and Address 3.
SThe user may also not immediately know what data is stored for the Internet Usage data set name. In a preferred implementation the user can browse the data by following one of the links in the Internet Usage column of the table data view 6810 to better understand what data is contained therein. Finally, a list of data set names may not highlight the oneto-many relationship between the first data sets of the table and the data set named Products. By seeing the data in the data view this relationship is readily apparent to the user. The user can also see that customers may have a variable number of products, a fact that may be important when preparing a document template 6825 and when specifying data for the assembly of a document set.
In step 6515 the user selects an identified data set from the displayed data view. An identified data set represents a data set that is selectable in the data browsing workspace 6890. In other words, it is a data set on which transformation operations can be performed (e.g. a column of a table, a set of labels or a data series from a graph or plot). The data sets displayed in a data view 6810 are ordered either as a result of their element order in an XML document or as a result of an orderBy clause of an XQuery expression. The user can select to re-order a data view, and thus all or some of the identified data sets contained therein, by selecting a data set in the data view 6810 and choosing to order that data set in either ascending or descending order. Reordering is described in more detail in Section 7.3.
The user may then proceed to associate the selected data set with a variable data slot in the document template 6825 in step 6520. Preferably this association step is achieved by the user selecting a data set from the data view 6810 and dragging the selected data set to a slot in the document template 6825. This association step instructs the system how to instantiate the target variable data slot in the document template 6825. In step 6530, the selected ordered data set is identified as variable data to be used to assemble a set of documents from the document template 6825. The data specification process then ends at step 6550.
So, for example, the user can select the Customer Name data set 7005 and drag this data set to the first variable data slot 6905 of the document template 6825 shown in Fig. 69. This association informs the VDP application that a document is to be generated for each member of the Customer Name data set. Similarly, the data sets named Address 1 (7010), Address 2 (7015) and Address 3 (7020) can be dragged to variable data slots 6910, 6915, and 6920, respectively.
Preferably, as the variable data is specified using the method 6500 depicted by the flowchart in Fig. 65, the variable data slots of the displayed document template in Fig. 68 are instantiated with the values of the first members of the data set. The user can then use various document controls 6860 to step through the documents of the generated document set if the user wishes to check the resulting document set. The user can also select to view the properties of each variable data slot to see details of the origin of the variable data. This document preview function, which is provided by most existing VDP applications, provides a means for checking the instantiation of the variable data and the resulting layout of the documents of the document set. However, such a document 693292 O -155o preview function represents a very slow and tedious method of checking the variable data d itself.
Preferably on saving the result of one or more data specification steps, the variable data is stored with the document template in a PPML document. This PPML document is 00 00 5 derived from the document used to represent the template. DOCUMENT elements are created for each document of the document set and variable data is instantiated for each Svariable data MARK element. Alternatively, the document template can be stored with queries associated for each of the variable data slots using any one of the many proprietary formats used for VDP Variable Data Intelligent PostScript PrintWare (VIPP) by Xerox Corp.]. These queries can be generated using the method described in Sections 6 and 7. The PPML format can also be used to store queries for each variable data slot a single DOCUMENT element is then required). The variable data can then be fetched by a separate process, possibly at print time.
Fig. 66 depicts an alternative method 6600 for specifying data for the assembly of a document set. The method 6600 enables a user to specify variable data which may have a many-to-one relationship with the variable data specified for other slots of the document template 6825. In step 6605 a document template containing at least one variable data slot is displayed. Step 6610 then proceeds in the same manner as described above for step 6510. So, for example, the table data view shown in Fig. 70 is displayed in the data browsing workspace 6890 of the VDP application. Clearly the Products data set 7030 has a many-to-one relationship with the other columns of the table shown in Fig. 70. To instruct the system to include just the first product for each customer, the user can select the first data member of the Products identified ordered data set in step 6615 and associate that member with a variable data slot in the document template 6825 in 693292 -156- U step 6620. In other words, the user could select the product, "Colour Magic" 7040 for the customer "Mr Bill Brown" 7035 and drag this data member to be associated with the variable data slot 6925 of the document template 6825.
The association step 6620 results in an ordered subset of the identified ordered data 00 5 set, which contains the selected member, being derived in step 6630 using the selected member as an indicative member or example. The deriving process requires that a master (ordered data set (or iterator) is associated with the document template. The master ordered data set specifies how the document set is to be created from the document template there is a document created for each member of the master ordered data set). Each of the members of the resulting derived ordered subset will have a corresponding member in the master ordered data set. Typically, the master ordered data set will have been associated with a variable data slot which has already been specified for the document template 6825 the Customer Name data set 7005 in the example depicted by Figs. 69 to 71). Preferably the master ordered data set has been specified by the user using the method 6500 described using Fig. 65. If no variable data has been specified before the method 6600 depicted in Fig. 66 commences, then the first ordered data set displayed in the currently selected data view 6810 the first column of a table, x-axis of a graph) will be assumed to be the master ordered data set for the document template. Alternatively, the user can explicitly specify the master ordered data set by, for example, using a pointing device such as a mouse to point to the required ordered data set.
A preferred approach to deriving the ordered subset is to use the member position information. As such, if the first Products member for a customer is associated with a slot, then a subset of the Products ordered data set 7030 is created consisting of just the 693292 -157- Sfirst Products member for each customer. This would result in documents being Sgenerated for each customer with the name of the first listed product for the customer being instantiated in the variable data slot 6925. The order of data members in the derived subset is the same as that in the ordered data set from which the subset was 00 05 derived.
(Ni When a subset is derived, if a customer has no product entries then a blank or empty entry is created for the customer in the subset. This results in a document where no products are listed. Alternatively, the VDP application may use conditional logic, substantially as described for existing VDP applications such as Microsoft WordTM 2002, to decide that if no products were listed for a customer then a document would not be created for the customer.
The subset of the ordered data set could also be derived on the basis of the value of the selected data member. So, in this case, if the selected data member consisted of the text "Typing Tutor" then a subset may be derived based on a similarity of a data member to the text value of the selected data member. Preferably the user can specify whether this alternative method of deriving a subset is to be used.
After the ordered subset is derived in step 6630, it is identified in step 6640 as the variable data used to assemble the document set. The data specifying process then ends in step 6650.
It is possible to use the method 6600 of Fig. 66 to select and associate more than one member of an identified ordered data set with separate variable data slots in the document template. Each selection results in the creation of a distinct subset of the ordered data set. For example, the user may select the member 7045 containing the text "I Can Type" and drag this member to be associated with slot 6930 and then finally select 693292 -158o member 7050 containing the text "Image Maker" and drag this member to slot 6935 in the document template. Each of the association steps 6620 for these selections results in the creation of a subset of the ordered data set named Products. This method enables the user to construct documents where variable data arising from multiple rows or records of 00
O
5 a table are included in each document of the document set. Existing VDP applications Ni, typically require users to reformat data if data from more than one row or record is 0 required to create a single document of the document set.
In the described example, the Customer Name data set 7005 represents the master ordered data set for the document set. In other words, a document is created for each member of the Customer Name data set. A preferred implementation allows more than one master ordered data set to be associated with a document template 6825. This means that the user may display and select data from more than one data view 6810 in the data browsing workspace 6890. In this case, the first association from each data view 6810 results in the assignment of a master ordered data set to the document template 6825. If this feature is used, then the user must make sure that the plurality of master ordered data sets makes semantic sense. For example, data may be stored in two completely separate customer databases. Rather than having to create a new data view representing a join across the two databases, data views from each of the individual databases could be displayed in the data browsing workspace 6890. The user may then select a data set from each of the data views and associate the selected data sets with separate slots in the document template 6825. This results in the document template 6825 having associated a master ordered data set from each of the data views. Therefore if there were m distinct elements in the first master ordered data set and n distinct elements in a second master ordered data set, then the number of assembled documents would be m x n.
693292 O -159- SData sets whose members have a many-to-one relationship with the members of the Smaster ordered data set can also be associated with a single variable data slot in the document template 6825. For example, the user may select the Products data set 7030 and associate this data set with a single variable data slot in the template 6925). In 00 this case the association implies to the system that all the products for a customer should N be listed in each document. Therefore, the documents generated for customers having a Ssingle listed product "Mr Will Spears" in Fig. 70) will contain only that single product instantiated in slot 6925 "I can Type"). However, the document generated for the customer "Mr Bill Brown" 7035 will contain a list of four products instantiated in slot 6925. Preferably the VDP application permits the user to customise how item lists should be presented separated by commas, with bullet points on separate lines, etc.).
Fig. 67 depicts a method 6700 for specifying data for the assembly of a document set. In step 6705 the document template 6825 containing at least one variable data slot is displayed in the document template panel 6822. Step 6710 proceeds as described for steps 6510 and 6610 and displays the data view. In step 6715, the user selects an identified ordered data set from the data view 6810, the members of which represent parent nodes of hierarchical data. For example, the ordered data set may comprise an ordered set of links as shown in the Internet Usage column 7025 of Fig. 70. The target of the link could be a document containing the hierarchical child data (e.g. an XML or SVG document) or a node in the XML document currently being presented in the data browsing workspace 6890. Alternatively, the hierarchical child data of the selected ordered data set could be displayed as a nested table or tree structure in the data view 6810.
The selected ordered data set is then associated with a variable data slot of the document template 6825 in step 6720, substantially as described for steps 6520 and 6620.

Preferably, in step 6730, a graphical object is generated for each member of the selected ordered data set, the graphical object representing a presentation of the hierarchical child data of the ordered data set member. The method of generating the graphical object is as described in Section 5. In other words, each of the generated graphical objects would be identical to that created if the corresponding link in the table in Fig. 70 was followed in the data browsing workspace 6890. Preferably the graphical object is represented in SVG, as described in Section 5. Other representations may also be used for the graphical objects. For example, they may be stored as JPEG images. The resulting ordered set of graphical objects is then identified as the variable data to be used to assemble the document set in step 6740. The process for specifying data ends in step 6750.
In the example depicted in Figs. 69 and 70, when the ordered data set named Internet Usage 7025 is selected and associated with slot 6940 in the document template 6825, preferably an SVG object is created for each of the "more data" links 7060 shown in Fig. 70. These objects are then stored in the resulting PPML document for the VDP job. As described for the methods depicted using Fig. 65 and Fig. 66, an alternative implementation may store a query associated with the slot and generate the graphical representations (e.g. SVG) at print time. This alternative implementation has the advantage that the size of the print job document (e.g. PPML document) sent to the printer is much smaller. The disadvantage is that the logic for generating the graphical objects must be included within the printing process.
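The trade-off between the two implementations can be sketched abstractly as follows; the renderer and member names are hypothetical stand-ins, and this is only a model of the design choice, not the described system.

```python
# Two hypothetical strategies for populating a variable data slot.

def prepare_time(slot_members, render):
    """Materialise a graphical object for every member now; the resulting
    print job is large, but the printer needs no generation logic."""
    return [render(m) for m in slot_members]

def print_time(slot_members, render):
    """Store only a deferred query (here, a closure); the print job stays
    small, but the rendering logic must run inside the printing process."""
    return lambda: [render(m) for m in slot_members]

render_svg = lambda m: f"<svg><!-- chart for {m} --></svg>"  # stand-in renderer
members = ["Mr Bill Brown", "Mr Will Spears"]

job_big = prepare_time(members, render_svg)   # objects stored in the job now
job_small = print_time(members, render_svg)   # generation deferred to print time
assert job_big == job_small()                 # same final output either way
```

Either strategy produces the same documents; they differ only in where and when the graphical objects are generated.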
This method of creating on-the-fly graphical objects (e.g. tables, charts, plots, etc.) in the documents of the document set has various advantages over the prior art. First, there is no requirement for the user to define parameters for the graphical objects (e.g. template type, colour, bar/line, etc.). These parameters are captured by templates used to present the data in a data view. These templates are described further in Section 5. Second, if the child data of the ordered data set varies from customer to customer, the generated graphical object will adapt to the data. So for example, if the data of one customer is numeric and that of another customer is non-numeric, then a graph and a table may result respectively in the final documents. In a further example, if a member of the ordered data set contained text data, then a simple text object would be created. Thus errors do not result when the format of the selected variable data for the graphical objects varies from document to document, and hence there are fewer wasted documents when the assembled document set is printed.
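The adaptive behaviour described above can be sketched as a simple dispatch on the child data; the function name and categories are illustrative assumptions, not the actual selection logic of the described system.

```python
def choose_display_type(child_data):
    """Pick a presentation for a member's hierarchical child data:
    a plain string -> text object, an all-numeric series -> graph,
    anything else (mixed or non-numeric records) -> table."""
    if isinstance(child_data, str):
        return "text"
    if all(isinstance(v, (int, float)) for v in child_data):
        return "graph"
    return "table"

print(choose_display_type([12.5, 40.1, 7.3]))    # graph
print(choose_display_type(["ADSL", "Dial-up"]))  # table
print(choose_display_type("No usage recorded"))  # text
```

Because the display type is derived from the data rather than fixed by the user, a customer whose data happens to be non-numeric simply gets a table instead of an error.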
Preferably the generated graphical objects are identical to those that would be generated if the corresponding links were followed in the data view 6810 of the data browsing workspace 6890. In an alternative implementation, the colours of the template used to present the data could adapt to the colours used by the document template. For example, the process that generates the graphical objects in step 6730 may analyse the document template 6825, such as shown in Fig. 69, to extract the dominant colours and then select graphical object template colours that were harmonious with these colours.
An example of such a process is described in US Patent No. 6,529,202, issued March 4, 2003.
The method 6700 described with respect to Fig. 67 can also be used to specify variable data comprising image data. If the data associated with a data view has links to images, these links could be displayed substantially as the Internet Usage column 7025 of the table shown in Fig. 70. Associating this ordered data set with a slot in the document template 6825 would result in an image being generated for each of the links in the ordered data set. The images could then be instantiated in the document set either at preparation time or at run-time.
The preparation phase for the VDP job using the document template 6825 depicted in Fig. 69 is completed by specifying a conditional expression for the variable slot 6945.
This process is substantially as described for other existing VDP applications such as PrintShop Mail Version 4.1 for Windows and Microsoft Word 2002. Preferably the user can provide an expression such as:

if (Products contains "Typing" or "I Can Type" and not "Typing Made Easy") then return "Typing Made Easy" else return "Images From Cameras"

Preferably, data sets involved in the conditional expressions can be indicated by the user using a pointing device such as a mouse to indicate the required data set.
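The example conditional expression above can be modelled as follows; the function is a hypothetical stand-in for whatever expression evaluator the VDP application actually uses.

```python
def choose_offer(products):
    """Evaluate the example conditional expression: if the customer's
    Products contain "Typing" or "I Can Type" but not "Typing Made Easy",
    offer "Typing Made Easy"; otherwise offer "Images From Cameras"."""
    has_typing = any(p in ("Typing", "I Can Type") for p in products)
    if has_typing and "Typing Made Easy" not in products:
        return "Typing Made Easy"
    return "Images From Cameras"

print(choose_offer(["I Can Type"]))                      # Typing Made Easy
print(choose_offer(["I Can Type", "Typing Made Easy"]))  # Images From Cameras
```

The expression is evaluated per customer, so the content of slot 6945 varies across the assembled document set.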
Once data has been specified for all the variable data slots in the displayed document template 6825, the user can select to either save or print the VDP job by pressing buttons 6870 or 6875, respectively, in Fig. 68. Preferably, saving a VDP job results in a PPML document being generated containing DOCUMENT elements for all the assembled documents of the document set. These DOCUMENT elements contain instantiated variable data derived as a result of the data specification steps described above in this section. Fig. 71 shows an example document generated from the document template shown in Fig. 69 and the variable data shown in Fig. 70. In each of the methods depicted in Figs 65, 66, and 67, the user can select to display the data view (e.g. step 6510) before displaying the document template (e.g. step 6505).
In another variation, the user may associate a document template as a property of a data view, and the variable data slots can be automatically specified based on the order in which the ordered data sets are listed in the data view 6810. In yet another variation of the methods depicted in Figs. 65, 66, and 67, variable data slots may not be explicitly pre-defined for the document template. A slot may be created when a user drops a selected data set at a selected position in the document template. The position may be determined by logical relationships with other components of the template. Alternatively, the physical (e.g. coordinate) position may be used. Once created, the initial size and properties of the slot may then be altered by the user.
SIndustrial Applicability The above that the arrangements described are applicable to the databases and to arrangements for facilitating vie wing access to data retained by such databases (eg.
including the computer and data processing industries).
The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiment(s) being illustrative and not restrictive.
In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including" and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have corresponding meanings.
Appendix A

This Appendix provides an example of XML code that affords a definition of a view.
<?xml version='1.0' encoding='utf-8'?>
<xsd:schema xmlns='http://www.cisra.com.au/DataBrowser'
    xmlns:xsd='http://www.w3.org/2001/XMLSchema'
    xmlns:xsl='http://www.w3c.org/1999/XSL/Transform'
    xsl:version='1.0'
    targetNamespace='http://www.cisra.com.au/DataBrowser'
    version='1.0'>

  <xsd:annotation>
    <xsd:documentation>XML Schema for Data Browser core attributes (Version 1.0).
    Copyright Canon Information Systems Research Australia (CISRA) 2001.
    All Rights reserved.</xsd:documentation>
  </xsd:annotation>

  <xsd:element name='DataView'>
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element ref='Name'/>
        <xsd:element ref='Description'/>
        <xsd:element ref='CreatedBy' minOccurs='0' maxOccurs='1'/>
        <xsd:element ref='DateCreated' minOccurs='0' maxOccurs='1'/>
        <xsd:element ref='Query'/>
        <xsd:element ref='Presentation' minOccurs='0' maxOccurs='1'/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>

  <xsd:element name='Name' type='xsd:string'/>
  <xsd:element name='Description' type='xsd:string'/>
  <xsd:element name='CreatedBy' type='xsd:string'/>
  <xsd:element name='DateCreated' type='xsd:date'/>

  <!-- Query block for this data view -->
  <xsd:element name='Query'>
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name='XQuery' type='xsd:string'/>
        <xsd:element ref='Mappings' minOccurs='0'/>
        <xsd:element ref='FilterList' minOccurs='0'/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>

  <!-- Mappings specific to this data view -->
  <xsd:element name='Mappings'>
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element ref='xsl:transform' minOccurs='1' maxOccurs='1'/>
      </xsd:sequence>
      <xsd:attribute name='name' type='xsd:string'/>
    </xsd:complexType>
  </xsd:element>

  <!-- Filter specifications for this data view -->
  <xsd:element name='FilterList'>
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name='Filter' minOccurs='0' maxOccurs='unbounded'>
          <xsd:complexType>
            <xsd:simpleContent>
              <xsd:extension base='xsd:string'>
                <xsd:attribute name='enabled' type='xsd:boolean'/>
              </xsd:extension>
            </xsd:simpleContent>
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>

  <!-- Definition for in-line additional presentation characteristics -->
  <xsd:element name='Presentation'>
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element ref='Mappings' minOccurs='0' maxOccurs='1'/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>

</xsd:schema>

Claims (27)

1. A method of associating an ordered data set with at least one slot in a document template, said method comprising the steps of:
(a) displaying a representation of said document template;
(b) displaying a view of data, said view of data identifying at least one ordered data set available for selection;
(c) detecting a selection of an ordered data set from said displayed view of data, said ordered data set comprising one or more data members; and
(d) associating said selected ordered data set with said at least one slot of said template.

1A. A method according to claim 1 further comprising the step of assembling a set of documents from said document template and said associated ordered data set.
2. A method of associating an ordered subset of a data set with at least one slot in a document template, said method comprising the steps of:
(a) displaying a representation of the document template;
(b) displaying a view of data, said view of data identifying at least one ordered data set available for selection, at least one member of said at least one ordered data set having a many-to-one relationship with a corresponding member of a master ordered data set;
(c) detecting a selection of a member of said at least one ordered data set from said displayed view of data; and
(d) associating said selected member with said at least one slot of said document template, wherein said associating defines an ordered subset of said at least one ordered data set, said subset being represented by said selected member and having a one-to-one correspondence with said master ordered data set.

2A. A method according to claim 2 further comprising the step of assembling a set of documents from said document template and said defined ordered subset.
4. A method according to claim 1 or 2 wherein said at least one slot is created by associating the selection of step with one of a logical or physical position in the displayed representation of said document template. A method according to claim 1A or 2A further comprising, after step the step of: (ba) transforming the members of said at least one ordered data set in preparation for inclusion in said set of documents. 693292 -169-
6. A method according to claim 1 or 2 wherein step (b) comprises creating a new view of data, said new view of data representing a join across a plurality of displayed views of data.

7. A method according to claim 1 or 2 wherein step (b) comprises creating a new ordered data set, said new ordered data set being derived from a combination of two or more ordered data sets displayed in said view of data.
8. A method according to claim 1 or 2 wherein step (c) comprises detecting a user dragging of the selected object and dropping said selected object over a pre-determined one of said slots of said document template.
9. A method according to claim 8 wherein said at least one pre-determined slot for variable data is visible in said displayed representation of said document template.

10. A method according to claim 8 wherein said at least one pre-determined slot for variable data is not visible in said displayed representation of said document template.
11. A method according to claim 1 or 2 wherein a source of data for said view of data comprises a result of a query directed to a database.
12. A method according to claim 1 or 2 wherein a source of data for said view of data comprises an XML document.

13. A method according to claim 12 wherein said XML document results from executing a query involving one or more databases.
14. A method according to claim 2 wherein said defined subset is derived based on a position of the corresponding selected member in said ordered data set.

15. A method according to claim 2 wherein said defined subset is derived based on a value of the corresponding selected member of said ordered data set.
16. A method according to claim 3 wherein a display type of said generated graphical presentation varies amongst individual documents of said set of documents, said display type of said generated graphical presentations being dependent on child data of the corresponding data member of the ordered data set.
17. A method according to claim 16 wherein the display type of said generated graphical presentation comprises a table.
18. A method according to claim 16 wherein the display type of said generated graphical presentation comprises a line graph.
19. A method of assembling a set of documents from a document template, said method comprising the steps of:
(a) displaying a representation of said document template;
(b) displaying a view of data, said view of data identifying at least one ordered data set available for selection;
(c) detecting a selection of at least one ordered data set from said displayed view of data, said ordered data set comprising one or more data members;
(d) associating said selected ordered data set with said at least one slot of said template; and
(e) assembling a set of documents from said document template and said associated ordered data set.
20. A method according to claim 19 wherein said selected ordered data set comprises parent nodes of hierarchical data, and step (e) further comprises the sub-step of generating a graphical presentation of the hierarchical data for each member of said selected ordered data set, said generated graphical presentations being used to assemble said set of documents.
22. A method according to claim 19 wherein said at least one slot is created by 00 associating the selection of step with one of a logical or physical position in the (Ni N displayed representation of said document template.
22. A method according to claim 19 wherein said at least one slot is created by associating the selection of step (c) with one of a logical or physical position in the displayed representation of said document template.
23. A method according to claim 19 further comprising, after step (b), the step of:
(ba) transforming the members of said at least one ordered data set in preparation for inclusion in said set of documents.
24. A method according to claim 19 wherein step (b) comprises creating a new view of data, said new view of data representing a join across a plurality of displayed views of data.

25. A method according to claim 19 wherein step (b) comprises creating a new ordered data set, said new ordered data set being derived from a combination of two or more ordered data sets displayed in said view of data.
28. A method according to claim 26 wherein said one slot for variable data is not 00 OO visible in said displayed representation of said document template.
29. A method according to claim 19 wherein a source of data for said view of data comprises a result of a query directed to a database and comprises an XML document and said XML document results from executing a query involving one or more databases. A method according to claim 21 wherein said defined subset is derived based on one of a position of the corresponding selected member in said ordered data set, and a value of the corresponding selected member of said ordered data set.
31. A method according to claim 20 wherein a display type of said generated graphical presentation varies amongst individual documents of said set of documents, said display type of said generated graphical presentations being dependent on child data of the corresponding data member of the ordered data set.
32. A method according to claim 31 wherein the display type of said generated graphical presentation is selected from the group consisting of a table and a line graph.
33. A method of associating an ordered data set with at least one slot in a document template for the creation of a set of documents, said method being substantially as 693292 -174- o descried herein with reference to any one of the embodiments as that embodiment is illustrated in Figs. 65 to 71 of the drawings.
34. A computer readable medium having a program recorded thereon, where the program is configured to make a computer execute a procedure to associate an ordered data set with at least one slot in a document template for the creation of a set of documents according to the method of any one of claims 1 to 33.

35. A computer readable medium according to claim 34 wherein said program presents a graphical user interface to a user of said computer by which detections of said selections and said associations are performed.
36. Computer apparatus configured to perform the method of any one of claims 1 to 33.

Dated the TENTH day of DECEMBER 2004
CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant
Spruson & Ferguson
AU2004237874A 2003-12-23 2004-12-10 Method for Specifying Data for the Assembly of a Document Set Abandoned AU2004237874A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2004237874A AU2004237874A1 (en) 2003-12-23 2004-12-10 Method for Specifying Data for the Assembly of a Document Set

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AU2003907199 2003-12-23
AU2003907199A AU2003907199A0 (en) 2003-12-23 Method for Specifying Data for the Assembly of a Document Set
AU2004237874A AU2004237874A1 (en) 2003-12-23 2004-12-10 Method for Specifying Data for the Assembly of a Document Set

Publications (1)

Publication Number Publication Date
AU2004237874A1 true AU2004237874A1 (en) 2005-07-07

Family

ID=34750764

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2004237874A Abandoned AU2004237874A1 (en) 2003-12-23 2004-12-10 Method for Specifying Data for the Assembly of a Document Set

Country Status (1)

Country Link
AU (1) AU2004237874A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11488269B2 (en) 2018-09-06 2022-11-01 Side, Inc. Blockchain-based system and method for listing document transformation and accountability
US11557011B1 (en) 2018-09-06 2023-01-17 Side, Inc. Blockchain-based system and method for document transformation and accountability
US11676229B2 (en) 2018-09-06 2023-06-13 Side, Inc. System and method for document transformation and accountability
US11734781B2 (en) 2018-09-06 2023-08-22 Side, Inc. Single-tier blockchain-based system and method for document transformation and accountability
US11748831B2 (en) 2018-09-06 2023-09-05 Side, Inc. System and method for document transformation
US11803923B1 (en) 2018-09-06 2023-10-31 Side, Inc. Blockchain-based system and method for purchase document transformation and accountability
US11869107B2 (en) 2018-09-06 2024-01-09 Side, Inc. Multi-tier blockchain-based system and method for document transformation and accountability
CN114548062A (en) * 2022-04-27 2022-05-27 成都瑞华康源科技有限公司 Report arranging method
CN114548062B (en) * 2022-04-27 2022-08-02 成都瑞华康源科技有限公司 Report arranging method

Similar Documents

Publication Publication Date Title
US7644361B2 (en) Method of using recommendations to visually create new views of data across heterogeneous sources
US7574652B2 (en) Methods for interactively defining transforms and for generating queries by manipulating existing query data
US20050060647A1 (en) Method for presenting hierarchical data
JP3842573B2 (en) Structured document search method, structured document management apparatus and program
Braga et al. XQBE (XQ uery B y E xample) A visual interface to the standard XML query language
Walmsley XQuery
US7991805B2 (en) System for viewing and indexing mark up language messages, forms and documents
US7249316B2 (en) Importing and exporting markup language data in a spreadsheet application document
US8688747B2 (en) Schema framework and method and apparatus for normalizing schema
US7363581B2 (en) Presentation generator
US20050022115A1 (en) Visual and interactive wrapper generation, automated information extraction from web pages, and translation into xml
US7530015B2 (en) XSD inference
JP2004265405A (en) Method and system for converting hierarchical data structure of schema base into flat data structure
Sengupta et al. XER-extensible entity relationship modeling
Pluempitiwiriyawej et al. A classification scheme for semantic and schematic heterogeneities in XML data sources
Choi et al. VXQ: A visual query language for XML data
AU2003270989B2 (en) Method of Using Recommendations to Visually Create New Views of Data Across Heterogeneous Sources
AU2004237874A1 (en) Method for Specifying Data for the Assembly of a Document Set
AU2003270985A1 (en) Method for Presenting Hierarchical Data
AU2003204824A1 (en) Methods for Interactively Defining Transforms and for Generating Queries by Manipulating Existing Query Data
Chawathe Managing change in heterogeneous autonomous databases
KR20020057709A (en) XML builder
Choi et al. Visual specification and optimization of XQuery Using VXQ
Zoller et al. WEBCON: a toolkit for an automatic, data dictionary based connection of databases to the WWW
Zhao A visual XML query interface

Legal Events

Date Code Title Description
MK1 Application lapsed section 142(2)(a) - no request for examination in relevant period