AU2024200855A1 - Handler object for workflow management system
- Publication number: AU2024200855A1
- Authority
- AU
- Australia
- Prior art keywords
- workflow
- data
- database
- user
- management
- Prior art date
- Legal status: Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/27—Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
- G06F16/2308—Concurrency control
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0633—Workflow analysis
Abstract
A method of ensuring data integrity when processing multiple simultaneous transactions
using cloud-native resources is implemented by a handler object. The method comprises
receiving a plurality of action requests, and then executing, as a result of the requests, a
corresponding plurality of actions. If, during the executing, an error occurs in at least one
action, then all actions being executed concurrently are cancelled when the error occurs.
Description
Technical Field
[0001] The present disclosure broadly relates to project workflow management and, more particularly, to a system for, and a method of, managing workflow in ongoing large, complex projects. The disclosure also relates to methods of ensuring data integrity when processing multiple simultaneous transactions using cloud-native resources.
Background
[0002] Large maintenance projects, such as ongoing building maintenance that includes various tasks performed by different service providers, can be difficult to manage. Housing providers, government departments, charitable organisations etc. responsible for property maintenance typically need to navigate complex workflows in order to scope, quote and manage actual work delivery to maintain their assets in a compliant state for tenants to occupy.
[0003] It would be useful to have a single source solution that enables users to simplify workflows without needing to leverage other independent solutions.
[0004] In large and complex systems, such as a solution capable of supporting this type of multi-user cooperation and interoperability, several actions may occur simultaneously (for example, as a result of multiple users accessing resources at the same time). This creates situations where actions need to be queued, occur out of order, are unsynchronised, and/or could result in performance degradation. Also, a single action from a user may involve updates to multiple different resources (such as databases, file stores, and/or event queues). This creates situations where these updates must all succeed in their entirety or not at all.
[0005] Any discussion of documents, acts, materials, devices, articles or the like which has been included in the present specification is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present disclosure as it existed before the priority date of each claim of this application.
Summary
[0006] The systems and methods described herein support a single source solution that enables users to simplify workflows without needing to leverage other independent solutions. The solution is both simple and effective to use, but is nevertheless comprehensive in output.
[0007] In one aspect, there is provided a method of ensuring data integrity when processing multiple simultaneous transactions using cloud-native resources. The method is implemented by a handler object and comprises: receiving a plurality of action requests; executing, as a result of the requests, a corresponding plurality of actions; and if, during the executing, an error occurs in at least one action, then cancelling all actions being executed concurrently when the error occurs.
[0008] At least one action may be associated with at least one non-database cloud-based resource. The at least one non-database cloud-based resource may include one or more of: a file storage and a queue.
[0009] At least one action may be associated with a cloud-based database.
[0010] Executing the plurality of actions may comprise executing the actions in parallel and asynchronously.
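As a minimal illustration of this aspect, the sketch below (TypeScript) shows one way a handler object could execute a plurality of actions in parallel and asynchronously, and cancel the remaining concurrent actions when at least one action errors. The Action signature, the AbortController-based cancellation, and the class name are assumptions for illustration and are not taken from the patent; each action is expected to observe the shared signal and stop work when it is aborted.

```typescript
// Illustrative sketch only: a handler object that executes a plurality of
// actions in parallel and asynchronously, and signals cancellation to the
// remaining actions as soon as at least one action errors.
type Action = (signal: AbortSignal) => Promise<void>;

class ParallelActionHandler {
  async execute(actions: Action[]): Promise<void> {
    const controller = new AbortController();

    const runs = actions.map(async (action) => {
      try {
        await action(controller.signal);
      } catch (err) {
        controller.abort(); // an error in one action cancels the concurrent actions
        throw err;
      }
    });

    // Wait for every action to settle, then surface the first error, if any.
    const results = await Promise.allSettled(runs);
    const failure = results.find(
      (r): r is PromiseRejectedResult => r.status === "rejected"
    );
    if (failure) {
      throw failure.reason;
    }
  }
}
```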
[0011] In another aspect there is provided a workflow management system. The system may be used for workflow management optimisation. The system comprises one or more workflow management clients configured to provide a user interface for inputting data to the system and displaying data provided by the system; a workflow management server in communication with the workflow management clients; and at least one database configured to store workflow management records, wherein the workflow management server is configured to use the method described above to: receive workflow data from the one or more workflow management clients; access the at least one database to store and retrieve the workflow data; and process the workflow data in order to output to a user, via the one or more clients, workflow notifications.
[0012] At least one workflow management client may be configured to receive a user input that adds, changes, and/or removes workflow data from the workflow management records stored on the at least one database.
[0013] Throughout this specification the word "comprise" or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
Brief Description of Drawings
[0014] Embodiments of the disclosure are now described by way of example with reference to the accompanying drawings in which:
[0015] Figure 1 is a schematic representation of the system architecture of a workflow management system.
[0016] Figure 2 is a schematic representation of an architecture topology of the cloud server in Figure 1.
[0017] Figure 3 is a schematic representation of a database structure.
[0018] Figure 4 shows a main dashboard of a user interface.
[0019] Figure 5 shows a Scopes user interface.
[0020] Figure 6 shows a Quotes user interface.
[0021] Figure 7 shows a Work Orders user interface.
[0022] Figure 8 shows a Purchase Orders user interface.
[0023] Figure 9 shows a Properties user interface.
[0024] Figure 10 shows a Reports user interface.
[0025] Figures 11A-11D show example embodiments of a mobile application's user interface.
[0026] Figures 12A-12C show examples of code implementing embodiments of a Local Transaction function.
[0027] Figure 13 shows a flow diagram of an embodiment of a function.
[0028] Figure 14 shows a flowchart of an example embodiment of an "open local transaction" method step.
[0029] Figure 15 shows a flowchart of an example embodiment of an "add task" method step.
[0030] Figure 16 shows a flowchart of an example embodiment of a "cancel local transaction" method step.
[0031] Figure 17 shows a flowchart of an example embodiment of a "complete local transaction" method step.
[0032] Figure 18 shows a flowchart of an example embodiment of a "save context changes" method step.
[0033] Figure 19 shows a flowchart of an example embodiment of a file process.
[0034] Figure 20 shows a flow diagram of an embodiment of a method to retrieve a list of "Scope Item" models.
[0035] Figure 21 shows a flowchart of an example embodiment of a "Get Object for Key" process.
[0036] Figure 22 shows a flowchart of an example embodiment of a "Get Objects for List of Keys" process.
[0037] Figure 23 shows a flowchart of an example embodiment of a "Query if Object Exists for Given Query" process.
[0038] Figure 24 shows a flowchart of an example embodiment of a "Query for Objects matching Given Query" process.
[0039] Figure 25 shows a flowchart of an example embodiment of a "Query for Objects matching Given Query and List of Keys" process.
[0040] Figure 26 shows a flowchart of an example embodiment of a "Get Child Objects for Parent Key" process.
[0041] Figure 27 shows a flowchart of an example embodiment of a "Get Object Models matching List of Keys" process.
[0042] In the drawings, like reference numerals designate similar parts.
Detailed Description
1. System overview
[0043] Figure 1 is a schematic representation of the system architecture of a workflow management system 100. The workflow management system 100 includes one or more workflow management clients 102 configured to provide a user interface for inputting data to the system and displaying data provided by the system. The system 100 includes a workflow management server 120 in communication with the workflow management clients 102, and at least one database 126 configured to store workflow management records. The workflow management server 120 is configured to receive workflow data from the one or more workflow management clients 102, access the at least one database 126 to store and retrieve the workflow data; and process the workflow data in order to output to a user 104, via the one or more clients 102, workflow notifications 118. At least one workflow management client 102 is configured to receive a user input that adds, changes, and/or removes workflow data from the workflow management records stored on the at least one database 126.
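To illustrate the roles described above, the following TypeScript sketch outlines hypothetical shapes for the client-server-database flow; all type and method names are assumptions and do not appear in the patent.

```typescript
// Hypothetical shapes for the client-server-database flow described above.
// None of these names come from the patent; they are illustrative only.
interface WorkflowRecord {
  id: string;
  type: "Scope" | "Quote" | "WorkOrder" | "PurchaseOrder";
  status: string;
  data: Record<string, unknown>;
}

interface WorkflowNotification {
  recipient: string;
  channel: "email" | "sms";
  message: string;
}

interface WorkflowManagementServer {
  // Receive workflow data submitted by a workflow management client.
  receiveWorkflowData(clientId: string, record: WorkflowRecord): Promise<void>;
  // Store and retrieve workflow management records in the database.
  saveRecord(record: WorkflowRecord): Promise<void>;
  getRecord(id: string): Promise<WorkflowRecord | undefined>;
  // Process workflow data and output notifications to users via the clients.
  processWorkflow(record: WorkflowRecord): Promise<WorkflowNotification[]>;
}
```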
[0044] In the exemplary embodiment shown in Figure 1, there are two client platforms 102 to provide functionality to the end user 104. The first client platform 102 is a web application 106 that provides functionality to the end user 104 via a web-based Graphical User Interface (GUI) supported by a web browser, for example on a desktop or laptop computer 108. The second client platform 102 is a mobile application 110, or "app", that is installed and run on a mobile device 112 such as a mobile phone or mobile tablet. The app provides functionality to the end user via a GUI of the app. Examples of the mobile app user interface can be seen in Figures 11A-D.
[0045] In some embodiments, the app can run on a mobile or tablet device that is supported by the Android or iOS mobile operating systems, and the app may be downloaded and installed from the Apple App Store or Google Play.
[0046] In one embodiment, the server 120 is run on an Azure cloud-based server architecture. The server 120 supports file storage by a storage account 122, system meta data storage in a meta data database 124, and subscriber transactional data storage in subscription data database 126. The server 120 supports client based services via a Web API, and supports system based services by function applications 130 that include at least a communication function 132 and an automated processes function 134.
[0047] Data inputs and processing by the system 100 are indicated in Figure 1 and summarised in Table A below.
[0048] Client workflow data updates 140 on workflow business objects (including Scope, Quote, and Work Order) by the end user are sent to the server 120 where the data is validated and then prepared for saving. Activity data is prepared for saving, and a notification queue item is added to a processing queue. The data transaction is saved in the subscriber database 126. A response 142 is sent to the client via email 144 and/or SMS message 146. The relevant updates are reflected and displayed in the web application 106, for example via dashboard statistics and charts 148 that can be viewed by the end user 104 on demand. A record of the change is recorded (both the previous value and the new value), and this is reflected and displayed on the web application 106, for example via a displayed activity list that can be viewed by the end user 104. In some embodiments, the information displayed via the web application is also displayed via the mobile app 110.
Input: Client workflow data updates on workflow business objects (Scope, Quote, Work Order) by end user.
Process: Update data sent to server. Update data validated. Update data prepared for saving. Activity data prepared for saving. Notification queue item added to processing queue. Data transaction saved in Subscriber database.
Output: Response to the client. Notification email and/or SMS sent to recipient. Updates reflected and displayed in the web application dashboard statistics and charts to be viewed by the end user on demand. Log of change (previous value and new value) reflected and displayed on the web application Activity List to be viewed by the end user.

Input: Client configuration data update by end user.
Process: Update data sent to server. Update data validated. Update data prepared for saving. Activity data prepared for saving. Data transaction saved in Subscriber database.
Output: Response to the client. Configuration data value (such as a select option) made available for utilisation for workflow data updates. Log of change (previous value and new value) reflected and displayed on the web application Activity List to be viewed by the end user.

Input: Client workflow file upload for related workflow business objects (Scope, Quote, Work Order) by end user.
Process: Data sent to server. Data validated. Data prepared for saving. Activity data prepared for saving. Data transaction saved in Subscriber database. File data saved to Subscriber file storage location.
Output: Response to the client. File available to be viewed by the end user on a client.

Input: Client workflow photo upload for related workflow business objects (Scope, Quote, Work Order) by end user. Photo taken using the device camera.
Process: Data sent to server. Data validated. Data prepared for saving. Activity data prepared for saving. Data transaction saved in Subscriber database. File data saved to Subscriber file storage location.
Output: Response to the client. File available to be viewed by the end user on a client.

Input: Web Application Report Request posted to the server.
Process: Report data retrieved from the Subscriber database. Report data populated into an Excel spreadsheet object. Excel spreadsheet object returned to the client as file content.
Output: Response to the client as available file download in the browser.

Table A
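A simplified sketch of the update pipeline summarised in Table A is shown below (TypeScript). The function names, stub database, and stub queue are assumptions for illustration only; the actual system persists to the Azure-hosted subscriber database and queues described elsewhere in this document.

```typescript
// Illustrative sketch of the update pipeline summarised in Table A.
// The stubs below stand in for the subscriber database and the notification queue.
interface UpdateRequest {
  businessObject: "Scope" | "Quote" | "WorkOrder";
  payload: Record<string, unknown>;
  userId: string;
}

interface ActivityLog {
  userId: string;
  previous: unknown;
  next: unknown;
}

const subscriberDb = {
  async saveTransaction(_items: unknown[]): Promise<void> {
    /* persist the update and activity log together */
  },
};

const notificationQueue = {
  async enqueue(_item: { recipient: string; businessObject: string }): Promise<void> {
    /* add a notification queue item to the processing queue */
  },
};

async function handleWorkflowUpdate(req: UpdateRequest): Promise<{ ok: boolean }> {
  if (!req.payload || !req.userId) {
    throw new Error("invalid update");                      // update data validated
  }
  const record = { ...req.payload, updatedBy: req.userId }; // update data prepared for saving
  const activity: ActivityLog = { userId: req.userId, previous: null, next: record };
  await notificationQueue.enqueue({ recipient: req.userId, businessObject: req.businessObject });
  await subscriberDb.saveTransaction([record, activity]);   // saved in the subscriber database
  return { ok: true };                                      // response returned to the client
}
```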
[0049] A client configuration data update by the end user is sent to the server 120, where the update data is validated and prepared for saving. Activity data is prepared for saving, and the data transaction is saved in the subscriber database 126. A response 142 is sent to the client, and the configuration data value (such as a user-selected option) is made available for utilisation in workflow data updates. A log or record of the change is recorded (the previous value and the new value), and the change is displayed on the web application and/or mobile application activity list that can be viewed by the end user.
[0050] A client workflow file upload for related workflow business objects (including Scope, Quote, and/or Work Order) by the end user is sent to the server 120 where the data is validated and prepared for saving. Activity data is prepared for saving, and the data transaction is saved in the subscriber database 126. File data is then saved to the subscriber file storage 122. A response 144 is sent to a client 102 (e.g. one or more client platforms), and a file with the relevant information is made available to be viewed by the end user 104 on one or more client platforms 102.
[0051] A Client Workflow photo upload for related workflow business objects (including Scope, Quote, and/or Work Order) by the end user, for example where a photo is taken using the device camera, is sent to the server 120 where the data is validated. The data is prepared for saving, activity data is prepared for saving, and the data transaction is saved in the subscriber database 126. File data is then also saved to the subscriber file storage 122. A response 144 is sent to a client 102 (e.g. one or more client platforms), and a file with the relevant information is made available to be viewed by the end user 104 on one or more client platforms 102.
[0052] When a Web Application Report Request is posted to the server 120, report data is retrieved from the subscriber database 126. Report data is populated into a spreadsheet object, e.g. an Excel spreadsheet object, and the spreadsheet object is returned to the client 102 as file content. A response is provided to one or more of the client platforms 102 in the form of an available file download, e.g. via a browser in the web application, via the mobile app 110, via email, and/or via SMS message.
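As an illustration of the report flow, the sketch below (TypeScript) retrieves report rows and returns them as downloadable file content. For brevity a CSV buffer stands in for the Excel spreadsheet object described above; the function and type names are assumptions, not the system's actual API.

```typescript
// Illustrative sketch of the report request flow: retrieve rows and return
// them as downloadable file content. A CSV buffer stands in for the Excel
// spreadsheet object described above.
interface ReportRow {
  [column: string]: string | number;
}

async function buildReportFile(
  rows: ReportRow[]
): Promise<{ fileName: string; contentType: string; content: Buffer }> {
  const columns = rows.length > 0 ? Object.keys(rows[0]) : [];
  const lines = [
    columns.join(","),
    ...rows.map((row) => columns.map((c) => String(row[c])).join(",")),
  ];
  return {
    fileName: "report.csv",
    contentType: "text/csv",
    content: Buffer.from(lines.join("\n"), "utf8"),
  };
}
```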
[0053] Figure 2 is a schematic representation of an architecture topology 200 of the cloud server 120 in Figure 1. The cloud server includes various cloud components:
[0054] Web Application 106: The web application 106 is served to the browser as a static web page 210 from the cloud Content Delivery Network interface 202. "Static web page 210" is the location of the webroot containing all web resources for the Web Application.
[0055] Web API 204: This is the Server interface to the client 102 (web application and mobile app).
[0056] Function Apps 130: Utilising the Azure Function App technology, these are utility apps to perform system functions, i.e., functions that are not the direct result of end user interactions with a GUI platform.
[0057] Storage Account 206: The storage account 206 is used for storage of system file Meta data in the Meta Data database 124 and Subscription file storage 122.
[0058] File data is received from a first file database 212 and stored in a Files storage 214. Message data is received from a second message database 216 and is stored in the failure message queue 218 or the integration message queue 220. The first file database 212 is the conduit for the Private Endpoint connecting the Virtual Network that hosts the software applications to the File System that is hosted within the Storage Account. The first file database 212 supports securing access to the file storage location in the Storage Account.
[0059] The Files storage 214 is a file system that contains files related to the functioning of the software applications in a file storage location within the File System. The Files storage 214 contains all files related to a Subscriber's software instance in its own separate file storage location within the File System.
[0060] The second database 216 is the conduit for the Private Endpoint connecting the Virtual Network that hosts the software applications to the File System Queues that are hosted within the Storage Account. The failure message queue 218 is a storage location that is used as a system log for Function App processing failures. The integration message queue 220 is a storage location that is used to queue processing actions for Function App processing.
[0061] The Virtual network 230 is the secure network host for the software applications. The Subnet 232 within the Virtual Network 230 hosts the endpoints to other resources in the Cloud Native hosting topology outside of applications. This secures access to the other resources such as Storage Account and KeyVault 234. The KeyVault 234 is an Azure resource for storing sensitive information securely such as authentication secrets, connection strings and identifiers that are used by the applications. Subnet 236 within the Virtual Network 230 is used to securely host the Relational Database Management System.
[0062] Function app - integration 240 is an Azure Function App that processes the system actions that are logged in the Integration Message Queue. The Function app mailer 242 is an Azure Function App that pushes system messaging.
[0063] Single Sign on Service: Utilising the Azure B2C identity management service 208. A User Profile can be added to a Subscription using the email address of an identity provider. Upon signing up the User Profile is matched to the user's account for the identity provider to allow the system to authorise access to the system using the login credentials and identity for the user from the identity provider. In the exemplary embodiment, identity providers supported by the system are Microsoft, Facebook, Google, Apple. In some embodiments, the server 120 uses token-based authentication, and in the client-server model HTTP requests made by the client 102 (web and/or mobile) to the application interface 202 are authenticated with an encrypted token to verify the identity of the user 104 and ensure authorised access.
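A minimal sketch of the token-based authentication described above is shown below (TypeScript): the client attaches the encrypted access token to each HTTP request so the server can verify the user's identity. The endpoint URL and error handling are placeholders, not the system's actual API.

```typescript
// Illustrative sketch of token-based authentication: the client attaches the
// access token to each HTTP request so the server can verify the user.
// The URL below is a placeholder, not the system's actual endpoint.
async function fetchWorkOrders(accessToken: string): Promise<unknown> {
  const response = await fetch("https://api.example.invalid/workorders", {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (response.status === 401) {
    throw new Error("Unauthorised: the token is missing, expired, or invalid");
  }
  return response.json();
}
```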
[0064] Figure 3 is a schematic representation of a database structure 300. Transactional and Configuration data is stored in databases hosted by a Relational Database Management System 302.
[0065] The System database 304 stores the Meta Data for the system. Meta data is made up of information required by the system for the system to operate. This includes information about Subscriptions and Users of the system as well as a base configuration for the system. The Subscriber Database 306 includes a Subscription database instance 308 per subscriber.
[0066] For scalability, optimised performance and to comply with isolation of data stipulations, the system architecture is structured to provide a separate database and file storage location for each system Subscription.
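The per-Subscription isolation described above might be resolved as in the following sketch (TypeScript), where each subscription maps to its own database connection and file storage location. The metadata shape, naming convention, and connection string are assumptions for illustration.

```typescript
// Illustrative sketch of per-Subscription isolation: each subscription maps to
// its own database connection and file storage location. The metadata shape
// and naming convention are assumptions.
interface SubscriptionResources {
  databaseConnectionString: string;
  fileStorageLocation: string;
}

const subscriptionMetadata = new Map<string, SubscriptionResources>([
  [
    "subscription-a",
    {
      databaseConnectionString: "Server=<host>;Database=subscriber_a;",
      fileStorageLocation: "files/subscription-a/",
    },
  ],
]);

function resolveResources(subscriptionId: string): SubscriptionResources {
  const resources = subscriptionMetadata.get(subscriptionId);
  if (!resources) {
    throw new Error(`Unknown subscription: ${subscriptionId}`);
  }
  return resources;
}
```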
2. Software as a Service
[0067] The workflow management system is made available as a Software as a Service (SaaS) system, being an online system provided via cloud computing where all data and services are hosted on the cloud. Utilisation of the system is facilitated by a Subscription to the SaaS service. The system is presented to the Subscription end users as a virtual instance of the system. Use of the system by end users is facilitated by User Profiles which are assigned to end users under a Subscription. All operations carried out by an end user via their User Profile are applied within the Subscription.
[0068] As described with reference to Figure 1, the system functions on a client server model. Functionality is provided to the end user via the client. Services to the client are provided by the server. All data at rest is centralised at the server.
[0069] The subscription governs the way in which a subscriber can use the system. The Subscription is set up by administrators.
[0070] Subscriptions are provisioned with:
- A Subscription Administrator user, which is a User Profile that is used by the Subscriber for their own self service of Subscription system tasks such as User Profile administration. A Subscription Administrator can access the Web or App under their Subscription.
- A database instance to contain the subscriber's transactional data.
- A file store which contains all of the subscriber's uploaded files.
[0071] Colour Usage: The system uses colours to indicate operational areas, for example the use of a distinctive purple colour to identify work orders. This use of colours is material to the intuitive use of the system as it allows for instant recognition by the user of the area and the relevant functionality.
[0072] The system comprises innovative use of colours to improve readability and the memory recall enabled by the hippocampus and thalamus of the users of the system (for example as described in https://en.wikipedia.org/wiki/Color_psychology and https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3743993/).
[0073] System Flows: The system has a unique integrated concept of "Flowing" from one user functional area to another. The intent is to integrate the required workflow in a way that has a natural feel but also ensures process and regulatory compliance. This is achieved by prominent call to action items that provide a breadcrumb approach to a user pathway. It is the very simplicity of this approach that creates the uniqueness in the way that users are able to navigate the application to bring about their required business outcomes.
[0074] Figure 4 shows the main dashboard 400 of the user interface. On the left can be seen the main menu items of the system. Each of these menu items is associated with a different user interface display: the dashboard 400, scopes 500, quotes 600, work orders 700, purchase orders 800, properties 900, reports 1000, and settings 1002. The dashboard 400 includes a scorecard 410 that provides a colour-coded overview of the total number of work orders, purchase orders, defects, recalls, scopes, and quotes that are open and overdue, and also provides a summary for each in the form of the percentage overdue. The dashboard 400 includes a purchase order (PO) breakdown 420 according to priorities assigned to the purchase orders. The PO breakdown 420 is shown as an easily understood infographic, in this example a pie chart 422 showing the proportion of open purchase orders in each priority category 424. The information displayed is user configurable via one or more user-selectable parameters; in this example embodiment two parameters are selectable via drop down menus 426.
[0075] The dashboard 400 includes a graph 430 of PO volumes by priority over time, and a summary 440 of contractor statistics (including total costs, overdue PO's, number of defects, and recalls). An overview of actionable items 450 is displayed in the centre of the dashboard 400. In this embodiment the overview 450 indicates where (and for how many instances) data entry and/or review is required for work orders, purchase orders, defects, recalls, scopes, and quotes. Work type analysis 460 is displayed that summarises the number of jobs over time for various work types, including electrical, asbestos, appliances, blinds, bricklaying, carpentry etc. The percentage increase or decrease 462 in work type from a previous month to a current month is displayed.
[0076] A time KPI % graph 470 over time displays a time KPI measure for each of work orders, purchase orders, defects, scopes, and/or quotes. In some embodiments, the dashboard 400 may include a map 480 showing the location of one or more contractors.
[0077] Figure 5 shows the Scopes user interface 500 that is displayed when the scopes main menu item is selected. Displayed is a sortable, configurable, and searchable list of job scopes that each have an associated scope number, allocated contractor, address, status, scope owner (or other associated user), issue date, due date, and/or completion date. The interface allows a user to create a new record, search records, and export one or more records.
[0078] Figure 6 shows the Quotes user interface 600 that is displayed when the quotes main menu item is selected. Similar to the Scopes user interface 500, the Quotes user interface 600 displays a sortable, configurable, and searchable list of job quotes that each have an associated quote number, allocated contractor, address, status, quote owner (or other associated user), issue date, due date, and/or completion date. The interface allows a user to create a new record, search records, and export one or more records.
[0079] Figure 7 shows the Work Orders user interface 700 that is displayed when the work orders main menu item is selected. Displayed is a sortable, configurable, and searchable list of work orders that each have an associated work order number, allocated contractor, address, status, work order owner (or other associated user), issue date, due date, completion date, and/or cost. The interface allows a user to create a new record, search records, and export one or more records.
[0080] Figure 8 shows the Purchase Orders user interface 800 that is displayed when the purchase orders main menu item is selected. Displayed is a sortable, configurable, and searchable list of purchase orders that each have an associated purchase order number, respective work order number, allocated contractor, address, status, purchase order owner (or other associated user), due date, completion date, and/or cost. The interface allows a user to search records, and export one or more records.
[0081] Figure 9 shows the Properties user interface 900 that is displayed when the properties main menu item is selected. Displayed is a sortable, configurable, and searchable list of properties associated with one or more tasks that have associated scopes, quotes, work orders, and/or purchase orders. Each record includes one or more property descriptions and/or property-specific data entries including a property reference, address, an internal reference, and/or a property type. The interface allows a user to create a new record, search records, and export one or more records.
[0082] Figure 10 shows the Reports user interface 1000 that is displayed when the reports main menu item is selected. A dropdown menu 1004 allows a user to select a report to view and/or download relating to one or more scope, quote, work order, purchase order, and/or property records.
[0083] The settings menu item 1002 provides access to a settings user interface (not shown). Settings submenus are provided that each display a separate settings interface where a user can view and/or amend settings, including submenus for rates 1006, companies 1008, contractors 1012, areas 1014, projects 1016, and fields 1018.
3. Overview of the main system flows
[0084] Tables 1 to 17 summarise various system flows that describe how users of the workflow management system 100 enter, retrieve, and/or manage workflow related data. The system is used by a variety of users, including:
- contractors who provide services, referred to as "external users"; and
- primary users associated with an entity responsible for the workflow management (such as an overall maintenance provider), referred to as "internal users".
[0085] In the tabled workflows, one or more of the steps may be omitted or changed, and/or the order of steps may be rearranged, while still achieving a similar result.
[0086] Table 1 describes how a new scope record is created, reviewed and approved. This is an internal scope without higher review. In some embodiments, a pre-requisite may be that an internal user exists as an active business entity.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Login > Home > Create New > Scope. 2. Enter required details (a. Allocate to Internal user). 3. Save & Continue. 4. Send Changes.
Outcome: 1. New Scope created and assigned to Internal user. 2. Scope status = [Assigned]. 3. Scope sent (email/SMS) to Internal user.

Phase 2
Who: Workflow management system user (Internal user)
Steps: 1. Open Scope. 2. Record Scope results by adding items to scope. 3. Accept all items. 4. Update Scope status to [Reviewed & Approved]. 5. Save Scope.
Outcome: 1. Scope status = [Reviewed & Approved].

Table 1
[0087] Table 2 describes how a new scope record is created, reviewed and approved, and subsequently modified. This relates to an internal scope with higher review. In some embodiments, a pre-requisite may be that an internal user exists as an active business entity.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Login > Home > Create New > Scope. 2. Enter required details (a. Allocate to Internal user). 3. Save & Continue. 4. Send Changes.
Outcome: 1. New Scope created and assigned to Internal user. 2. Scope status = [Assigned]. 3. Scope sent (email/SMS) to Internal user.

Phase 2
Who: Workflow management system user (Internal user)
Steps: 1. Open existing Scope. 2. Record Scope results by adding items to scope. 3. Update Scope status to [Review Required]. 4. Save Scope.
Outcome: 1. Scope updated with scope of work. 2. Scope status = [Review Required].

Phase 3
Who: Workflow management system user (any)
Steps: 1. Open existing Scope. 2. Modify Scope items where required (e.g. adjust qty, remove item, add item, adjust comment, etc.). 3. Review by Accepting or Rejecting all items. 4. Update Scope status to [Reviewed & Approved]. 5. Save Scope.
Outcome: 1. Scope status = [Reviewed & Approved].

Table 2
[0088] Table 3 describes how a new scope record is created, reviewed and approved, and subsequently updated following completed work. This relates to an external scope without higher review. In some embodiments, a pre-requisite may be that the contractor exists as an active business entity.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Login > Home > Create New > Scope. 2. Enter required details (a. Allocate to Contractor). 3. Save & Continue. 4. Send Changes.
Outcome: 1. New Scope created and assigned to Contractor. 2. Scope status = [Assigned]. 3. Scope sent (email/SMS) to Contractor.

Phase 2
Who: Contractor
Steps: 1. Conduct scope of work. 2. Return scope of work to Workflow management system user.
Outcome: 1. Workflow management system user has copy of scope of work.

Phase 3
Who: Workflow management system user (any)
Steps: 1. Open existing Scope. 2. Record Scope results by adding items to scope. 3. Review items by Accepting or Rejecting. 4. Update Scope status to [Reviewed & Approved]. 5. Save Scope.
Outcome: 1. Scope updated with scope of work. 2. Scope status = [Reviewed & Approved].

Table 3
[0089] Table 4 describes the creation, review, assignment and completion of a quote request end to end.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Login > Home > Create New > Quote. 2. Enter required details. 3. Save & Continue. 4. Add items to be quoted. 5. Invite Contractor(s) to provide a Quote. 6. Save & Send.
Outcome: 1. New Quote created. 2. Invitation(s) to Quote sent (email/SMS) to Contractor(s). 3. Quote status = Quote Request Sent. 4. Contractor status = Sent.

Phase 2
Who: Contractor(s)
Steps: 1. Return Quote to Workflow management system user.
Outcome: 1. Workflow management system user has copy of Quote.

Phase 3
Who: Workflow management system user (any)
Steps: 1. Open existing Quote. 2. Update Contractor with details from their returned Quote (i.e. pricing, items, comments, Date Quote Received, etc.). 3. Update Contractor status to [Responded]. 4. Save.
Outcome: 1. Contractor updated with Pricing. 2. Quote status = Quote Request Sent.

Phase 4
Who: Workflow management system user (any)
Steps: Repeat the above until all Contractor quotes have been recorded. Note: If a Contractor declines to quote, update Contractor status to [Declined to Quote]. Note: If a Contractor Quote is not required, update status to [Cancelled].
Outcome: 1. All Contractors are in one of the following statuses: a. [Responded] (at least one); b. [Declined to Quote]; c. [Cancelled]. 2. Quote Request status = Quote Review Required.

Phase 5
Who: Workflow management system user (any)
Steps: 1. Tick winning Contractor. 2. Save.
Outcome: 1. Ticked Contractor status = Awarded. 2. Quote Request status = Quote Awarded.

Table 4
[0090] For the quote requests described in Table 4, in some embodiments one or more of the following features are provided:
- a new quote picks up the full address of the associated property;
- the related scope is displayed;
- special project comments with project-specific information may be included, removed, and/or edited in a record;
- records may include one or more relevant phone numbers;
- records include one or more associated company names;
- new quote and scope records are highlighted;
- records include default fields, for example "not applicable" for maintenance required;
- records may include detailed descriptions associated with tasks;
- reference numbers may be autogenerated; and
- multiple items may be added, amended and/or removed at a time.
[0091] Table 5 describes creating a quote from a scope.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Open a Scope in [Reviewed & Approved] status. 2. New Quote Request. 3. Enter any missing details required to create Quote (e.g. Target Date). 4. Save & Continue.
Outcome: 1. Quote is created with items from the Scope appearing in the Items tab. 2. Quote status = Created.

Table 5
[0092] Table 6 describes the creation of a work order from a scope.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Open a Scope in [Reviewed & Approved] status. 2. New Work Order. 3. Enter any missing details required to create Work Order (e.g. Target Date, Priority, etc.). 4. Save & Continue.
Outcome: 1. Work Order is created with items from the Scope appearing in the Items tab. 2. Work Order status = Created.

Table 6
[0093] Table 7 describes creating a work order from a quote.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Open a Quote in [Quote Awarded] status. 2. New Work Order. 3. Enter any missing details required to create Work Order (e.g. Target Date, Priority, etc.). 4. Save & Continue.
Outcome: 1. Work Order is created with items from the Quote appearing in the Items tab. 2. Item pricing on Work Order matches Item Pricing on Quote. 3. Work Order status = Created.

Table 7
[0094] Table 8 describes a basic single contractor order end-to-end.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Login > Home > Create New > Work Order. 2. Enter required details. 3. Save & Continue.
Outcome: 1. New Work Order created. 2. Work Order status = [Created].

Phase 2
Who: Workflow management system user (any)
Steps: 1. (Items tab) Add. 2. Enter the following: a. SOR; b. Location; c. Quantity; d. Allocated To; e. Target (Date). 3. Save. 4. Send Changes.
Outcome: 1. Work Order status = [Allocated]. 2. Item added to Item list. 3. Purchase Order added to Purchase Order list. 4. Purchase Order status = [Allocated].

Phase 3
Who: Workflow management system user (any)
Steps: 1. Set Date Work Started for PO.
Outcome: 1. Purchase Order status = In Progress. 2. Work Order status = In Progress.

Phase 4
Who: Workflow management system user (any)
Steps: 1. Set Quality result (Items tab) for item as ✓.
Outcome: 1. Work Order status = [Reviewed & Approved]. 2. Purchase Order status = [Reviewed & Approved].

Table 8
[0095] Table 9 describes allocating a multi contractor order using bulk allocation.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Login > Home > Create New > Work Order. 2. Enter required details. 3. Save & Continue.
Outcome: 1. New Work Order created. 2. Work Order status = [Created].

Phase 2
Who: Workflow management system user (any)
Steps: 1. (Items tab) Add. 2. Enter the following: a. SOR; b. Location; c. Quantity. 3. Save. 4. Repeat to add at least one more item.
Outcome: 1. Items added to Item list.

Phase 3
Who: Workflow management system user (any)
Steps: 1. Bulk Allocate Items. 2. Add Contractor. 3. Enter required details. 4. Save. 5. Repeat to add at least one more Contractor. 6. Expand a Contractor. 7. Select at least one item. 8. Drag and drop to expanded Contractor. 9. Expand a different Contractor. 10. Select at least one item. 11. Drag and drop to expanded Contractor. 12. Repeat until all items allocated across all Contractors. 13. Send Changes.
Outcome: 1. Work Order status = [Allocated]. 2. Purchase Order added to Purchase Order list for each Contractor. 3. Purchase Order status = [Allocated].

Table 9
[0096] Table 10 describes the process to review, approve, and/or inspect a purchase order. In some embodiments all purchase orders may be selected and approved in one step.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Open PO in [Ready for Review] status.
Outcome: 1. Work Order opens on Items tab.

Phase 2
Who: Workflow management system user (any)
Steps: 1. Filter Purchase Order # column to show only the PO selected in the previous Phase. 2. In Quality column, select ✓ or x for each item.
Outcome: 1. Selecting ✓ requires no further action. 2. Selecting x expands to allow adding of a defect.

Table 10
[0097] Table 11 describes the process to review, approve, and/or inspect a work order, including the option of indicating a defect.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Open WO in [Ready for Review] status. 2. In Quality column, select ✓ or x for each item. 3. Send Changes.
Outcome: 1. Selecting ✓ requires no further action. 2. Selecting x expands to allow adding of a defect.

Table 11
[0098] Table 12 describes creating a non-quantity type defect purchase order, for example for an at fault contractor, and/or resolving a contractor match.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Open PO in [Ready for Review] status. 2. In Quality column, select x for an item.
Outcome: 1. Work Order opens on Items tab.

Phase 2
Who: Workflow management system user (any)
Steps: 1. Filter Purchase Order # column to show only the PO selected in the previous Phase. 2. In Quality column, select ✓ or x for each item.

Phase 3
Who: Workflow management system user (any)
Steps: 1. Enter the following: a. Review Result (not quantity type); b. Comment; c. Defect Priority. 2. Save. 3. Send Changes.
Outcome: 1. Defect PO added to Purchase Order list for Allocated To Contractor: a. Status = Allocated; b. $ Amount = 0.

Table 12
[0099] Table 13 describes creating a non-quantity type defect purchase order, for example for an at fault contractor, and/or to resolve a contractor difference or change.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Open WO in [Ready for Review] status. 2. In Quality column, select x for an item.
Outcome: 1. Selecting x expands to allow adding of a defect.

Phase 2
Who: Workflow management system user (any)
Steps: 1. Enter the following: a. Review Result (not quantity type); b. Comment; c. Defect Priority; d. At Fault Contractor (can be same or different to Allocated To Contractor); e. To Resolve Contractor (different to At Fault Contractor). 2. Save. 3. Send Changes.
Outcome: 1. Defect PO added to Purchase Order list for At Fault Contractor: a. Status = Reviewed & Approved; b. $ Amount = 0. 2. Standard PO added to Purchase Order list for To Resolve Contractor: a. Status = Allocated; b. $ Amount = as per Contractor Pricing. 3. Recovery PO added to Purchase Order list for At Fault Contractor: a. Status = Reviewed & Approved; b. $ Amount = value of Standard PO (#2). Return to the Items tab to see the new items for the added POs.

Table 13
[0100] Table 14 describes creating a non-quantity type recall purchase order, for example for an at fault contractor, and/or to resolve a contractor match.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Open WO in [Ready for Review] status. 2. In Quality column, select x for an item.
Outcome: 1. Selecting x expands to allow adding of a defect.

Phase 2
Who: Workflow management system user (any)
Steps: 1. Enter the following: a. Review Result = Recall; b. Comment; c. Defect Priority. 2. Save. 3. Send Changes.
Outcome: 1. Recall PO added to Purchase Order list for Allocated To Contractor: a. Status = Allocated; b. $ Amount = 0.

Table 14
[0101] Table 15 describes creating a non-quantity type recall purchase order, for example for an at fault contractor, and/or to resolve a contractor difference or change.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Open WO in [Ready for Review] status. 2. In Quality column, select x for an item.
Outcome: 1. Selecting x expands to allow adding of a defect.

Phase 2
Who: Workflow management system user (any)
Steps: 1. Enter the following: a. Review Result = Recall; b. Comment; c. Defect Priority; d. At Fault Contractor (can be same or different to Allocated To Contractor); e. To Resolve Contractor (different to At Fault Contractor). 2. Save. 3. Send Changes.
Outcome: 1. Recall PO added to Purchase Order list for At Fault Contractor: a. Status = Reviewed & Approved; b. $ Amount = 0. 2. Standard PO added to Purchase Order list for To Resolve Contractor: a. Status = Allocated; b. $ Amount = as per Contractor Pricing. 3. Recovery PO added to Purchase Order list for At Fault Contractor: a. Status = Reviewed & Approved; b. $ Amount = value of Standard PO (#2).

Table 15
[0102] Table 16 describes creating a quantity type defect on an open purchase order.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Open PO in [Ready for Review] status. 2. In Quality column, select x for an item.
Outcome: 1. Work Order opens on Items tab.

Phase 2
Who: Workflow management system user (any)
Steps: 1. Filter Purchase Order # column to show only the PO selected in the previous Phase. 2. In Quality column, select x for an item.
Outcome: 1. Selecting x expands to allow adding of a defect.

Phase 3
Who: Workflow management system user (any)
Steps: 1. Enter the following: a. Review Result (quantity type); b. Corrected Qty; c. Comment; d. Defect Priority. 2. Save. 3. Send Changes.
Outcome: 1. Defect/Recall PO added to Purchase Order list for Allocated To Contractor: a. Status = Reviewed & Approved; b. $ Amount = 0. 2. Quantity of original item adjusted to match Corrected Qty.

Table 16
[0103] Table 17 describes creating a quantity type defect on a closed purchase order.
Phase 1
Who: Workflow management system user (any)
Steps: 1. Open WO in [Ready for Review] status. 2. In Quality column, select x for an item.
Outcome: 1. Selecting x expands to allow adding of a defect.

Phase 2
Who: Workflow management system user (any)
Steps: 1. Enter the following: a. Review Result (quantity type); b. Corrected Qty; c. Comment; d. Defect Priority. 2. Save. 3. Send Changes.
Outcome: 1. Defect/Recall PO added to Purchase Order list for Allocated To Contractor: a. Status = Reviewed & Approved; b. $ Amount = 0. 2. Recovery PO added to Purchase Order list for Allocated To Contractor: a. Status = Reviewed & Approved; b. $ Amount = value of qty difference (e.g. original qty = 2, item value = $10 each, Corrected Qty = 1, Recovery Amount = $10).

Table 17
[0104] The workflow management system described herein is a solution that optimises business workflows. Advantageously, the user-based design has inbuilt positive reinforcement. The system integrates positive reinforcement from gamification methodology into a business workflow application. This can be seen in the colourisation of the system, which evokes a pleasant response to the interface (a "wow" factor that generates instant liking of the user experience).
[0105] Encoding decision-making artificial intelligence into the predetermined business pathways (i.e., flowing from one stage to the next stage) within the system, to create the path of least resistance for users' day-to-day operations, promotes learning by using the system as required by the business. The system enables self-correction as to the correctness of the activities the user undertakes and develops a reasoning capability in the user to identify business non-compliance actions.
[0106] The experience for users has a similar effect to that of playing computerised games, where there is a "buzz" in having found a way to beat the system, because the workflows can be used or changed flexibly. However, in effect the focus is to have a highly engaged user completing the required workflow in the system to the company's requirements without it feeling like an imposition on them.
[0107] The solution described herein is data driven to enable wise and timely decisions in the workflows that a business uses by providing a real time integrated view of everything that is happening, what should have happened, or will happen. The workflow management system enhances the user experience by providing the information in an easy-to-understand form, thereby enabling the user to make better and/or more timely decisions.
[0108] As every step within the business workflows is undertaken, it is immediately shown on the dashboards. The workflows underpinning the solution are built to work together to ensure that none of the key business process or regulatory compliance requirements are missed.
[0109] The workflow solution tracks costs from the moment they are received via the quoting module through to invoice payment. It creates a real-time budget of what has been committed via business process which is captured in the workflow and shows what the spend is going to be over time. This allows a business to initiate, manage and close workflows in real time. This workflow data is turned into information that is presented in real time dashboards. Businesses can make decisions or act the moment it is required.
[0110] In the workflows the user can see what is happening and what it is costing the moment the information is available in the workflows. All of this is measured against preselected key performance indicators that ensure the business workflow is executing as intended by the user of the system. These key performance indicators are determined by the customer at the time of implementation of the system and then tracked within the system.
[0111] The workflow management system simplifies workflows to get the work done in a way that is enjoyable, safe, and effective.
[0112] Advantageously, analytics can be used to alert the user to things that may happen before they do. With this knowledge businesses can respond efficiently and effectively to opportunities and challenges as they arise in the workflows.
[0113] The system is able to integrate with an ecosystem of other applications to ensure various business processes can be completed successfully.
[0114] The system is also built to accommodate, enable, and reinforce common user workarounds that expedite the ability to get to an outcome. This means there is a procedural flow in the system that can be used from start to finish, or users can start anywhere within that flow at the point that enables them to achieve their outcome in the fastest possible time. Advantageously, this quickly results in user satisfaction, providing positive reinforcement of the work effort that is undertaken in the system. It reinforces for the user that the system is working in the way they want to work, rather than requiring them to change how they want to work because of the system. The system itself provides the user with the opportunity to feel empowered to meet the procedural and company business requirements in the workflow.
[0115] As the system is built on the premise of workflows supporting the business requirements to manage an asset, any business activity that has a primary asset that needs to be managed can be managed by the system. Accordingly, the system can be used by, e.g., local councils to manage assets such as parks, bridges or roads under their control. It could also be used by health care providers, such as NDIS providers, where there is a series of workflows around the patients they care for and the patient is in effect the asset being supported.
[0116] Advantageously, the system is applicable to almost any business model that has a procedural workflow requirement to manage any type of asset.
4. "Local Transition" (LT) Handler Object
[0117] In large and complex systems, like the workflow management system described herein, several actions may occur simultaneously (for example, as a result of multiple users accessing resources of the system at the same time). This creates situations where actions need to be queued, occur out of order, are unsynchronised, and/or could result in performance degradation. Also, a single action from a user may involve updates to multiple different resources (such as databases, file stores, and/or event queues). This creates situations where these updates must all succeed in their entirety or not at all. Accordingly, the inventors have developed a method to address these issues.
[0118] Actions involving multiple resources, such as updates to a cloud-native database, cloud storage accounts (like blob storage), and queues, are not typically encapsulated within a single handler object in object-oriented programming for a number of reasons, including:
a. Separation of Concerns: It's a fundamental principle in software engineering to separate different concerns or responsibilities. Handling interactions with multiple resources involves distinct concerns such as database management, file storage, and messaging. By keeping these concerns separate, the code becomes easier to understand, maintain, and extend.
b. Scalability and Flexibility: Encapsulating all actions within a single handler can lead to a monolithic design, which may become unwieldy as the system grows or if requirements change. By breaking down functionality into smaller, focused components, it becomes easier to scale and adapt the system over time. For example, if you later need to update the database without affecting file storage, having separate handlers allows for more flexibility.
c. Reusability: Separating concerns allows for better reuse of code. You may have different parts of your system that need to interact with the same resources. By encapsulating resource-specific logic within separate handlers, you can reuse those handlers in different parts of your application without duplicating code.
d. Testing and Debugging: Having smaller, focused handlers makes it easier to test and debug the code. When each handler is responsible for a specific task, it's simpler to isolate issues and verify that each component behaves as expected.
e. Decoupling: Encapsulation promotes loose coupling between components, which is beneficial for maintainability and modularity. If one handler needs to be replaced or updated, it can be done without affecting other parts of the system, as long as the interfaces remain consistent.
f. Concurrency and Performance: In systems where multiple resources are involved, concurrency and performance considerations become crucial. By having separate handlers for each resource, it's easier to manage concurrency issues and optimize performance independently for each component.
[0119] Therefore, encapsulating all actions involving multiple resources within a single handler is typically not a recommended practice due to concerns related to separation of concerns, scalability, flexibility, reusability, testing, decoupling, and performance. Instead, breaking down functionality into smaller, focused components allows for a more maintainable, modular, and adaptable design.
[0120] When a single action involves updates to multiple different resources, ensuring that all updates succeed in their entirety or none at all is crucial for maintaining data consistency and integrity. If updates to some resources succeed but others fail, it can lead to various problems, including the creation of orphaned records.
[0121] An "orphaned record" refers to a record (or row) in a table that has lost its relationship with related records in other tables, typically due to the absence of a corresponding parent record. An "orphaned record" can also refer to a file or queue record in blob storage that is missing a referencing database records. This situation often arises when a record that is referenced by other records is deleted or modified in a way that breaks referential integrity.
[0122] Specifically, orphaned records occur in a database when there is an error in a database transaction. For example, if a database transaction fails to complete due to an error, some changes may have already been made to the database before the failure occurred. These partial changes can result in orphaned records if they are not properly rolled back. Orphaned records may also occur when a transaction violates referential integrity constraints, such as foreign key constraints. For example, if a transaction attempts to delete a record that is referenced by other records in the database and the deletion operation fails, the referenced records may become orphaned. Other examples of errors that may cause orphaned records include application errors, concurrency issues, cascading deletion, and/or manual operations. Such orphaned records are undesirable because they can lead to data inconsistency and integrity issues within the database. They represent data that is not properly associated with other data in the database, which can cause confusion and errors in application logic.
[0123] If an error does lead to an orphaned record, it will need to be handled, typically via a combination of deletion, reconciliation, error prevention, and automation, in order to maintain data consistency within the database or storage resource.
[0124] Because of these risks, and to ensure atomicity of multi-resource updates, it is generally accepted practice to implement transactional mechanisms that wrap all related operations within a single transaction. Furthermore, actions in a transaction are typically processed sequentially, with each action only proceeding once the previous one has successfully completed. Processing actions sequentially within a transaction helps ensure data integrity, maintain atomicity, provide isolation between concurrent transactions, simplify error handling, and maintain predictability in the system's behaviour. These factors are considered important for building robust and reliable systems, particularly in contexts where data consistency and reliability are important. Therefore, handler objects typically manage transactions sequentially to ensure the integrity and reliability of database operations.
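For reference, a minimal sketch of this conventional sequential approach is shown below, assuming an EF Core environment; the `AppDbContext` and `Order` types are hypothetical and included only to make the sketch self-contained.

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Hypothetical context and entity used only for this sketch.
public class AppDbContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();
}

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; } = "";
}

public static class SequentialCommitExample
{
    // Conventional approach: related operations are wrapped in one database
    // transaction and processed sequentially; any failure rolls everything back.
    public static async Task SaveOrderAsync(AppDbContext db, Order order)
    {
        await using var tx = await db.Database.BeginTransactionAsync();
        try
        {
            db.Orders.Add(order);         // step 1: stage the database change
            await db.SaveChangesAsync();  // step 2: write it (still inside the transaction)
            await tx.CommitAsync();       // step 3: make all changes visible together
        }
        catch
        {
            await tx.RollbackAsync();     // a failure reverts every change made so far
            throw;
        }
    }
}
```

The blocking, step-by-step character of this pattern is what the parallel approach described below is designed to avoid.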
[0125] When actions in a transaction are processed sequentially, tasks are typically executed synchronously. Synchronous execution is blocking in nature, meaning that the execution of a task blocks the execution thread until the task completes. During this time, the thread is occupied with the task, and other tasks cannot proceed until the current task finishes. This can lead to potential performance bottlenecks, especially in scenarios with long-running tasks or high concurrency. It is typically considered that asynchronous execution introduces complexity in managing concurrency, handling race conditions, and ensuring data consistency.
[0126] However, this type of approach (i.e., having multiple handlers that facilitate sequential and synchronous transactions) is not suitable for the systems described herein because of performance implications and delays in end user experience, the requirement to maintain data integrity between multiple disparate resources, and the requirement to scale up to any number of disparate cloud native resources. Accordingly, the inventors have invented a novel "parallel transaction" method that supports asynchronous actions, while still allowing for rollback including avoiding orphaned data records. In this context "rollback" refers to the reversal of all changes made by the transaction up to that point.
[0127] In a cloud-native environment, handler objects typically manage database transactions and interact with cloud-native resources by leveraging the capabilities provided by cloud service providers and associated APIs. When database transactions occur across non-standard resources, such as cloud storage accounts or other external services, this introduces complexity and challenges beyond what is typically encountered when transactions are confined to a single database. This results in particular challenges relating to data consistency and integrity across all resources involved. For example, transactions across non-standard resources may have performance implications due to network latency, API call overhead, and resource contention. Furthermore, cloud storage services like Amazon S3 or Azure Blob Storage typically offer eventual consistency models, where data changes may not be immediately visible across all replicas. This can lead to inconsistencies when transactions involve both databases and cloud storage accounts.
[0128] Handling transactions across cloud-native architecture, especially involving non-standard resources like cloud storage accounts, introduces several challenges compared to traditional database transactions, particularly relating to consistency, concurrency, error handling, security, and monitoring. Cloud-native resources such as cloud storage accounts may not natively support traditional transactional guarantees like ACID properties (Atomicity, Consistency, Isolation, Durability). Ensuring data consistency and integrity across multiple resources becomes challenging when these resources have different consistency models and transactional semantics.
[0129] Accordingly, at the priority date, there existed no technology able to handle transactions in these scenarios.
[0130] The inventors have developed a solution that is able to handle multiple tasks, actions, and/or transactions at the same time to ensure data integrity between multiple disparate resources (such as databases as well as queues and file storage) while also achieving performance requirements with respect to speed, end user experience, and scalability.
[0131] Because the handler is able to execute several actions asynchronously, the performance scales up automatically based on the available hardware. The handler supports flexibility in not restricting the types of actions to be executed.
[0132] The actions that the handler executes can be reusable functions which are defined separately and encapsulate only the logic specific to that action and resource. As each reusable action is then responsible for a specific task, it is simpler to isolate issues and verify that each component behaves as expected.
[0133] The asynchronous execution of actions within the handler adheres to the principles of concurrency and performance.
[0134] This solution is provided by a "Local Transaction" object that enables a method of ensuring data integrity when processing multiple simultaneous transactions using cloud-native resources. The method, implemented by the "Local Transaction" object, includes the steps of receiving a transaction request (for example when a record needs to be committed to the database), and then executing the transaction across at least one non-database cloud-based resource (for example cloud-based storage). Transactions are typically considered to be database transactions; however, this method provides support for non-standard resources as well (i.e., not just the typical database, but also the cloud storage accounts and/or queues which are not normally associated with handling transactions). In this method, executing the transaction includes executing tasks in parallel and/or asynchronously to increase the speed of the transaction and, as a result, improve the user experience.
[0135] If, during the execution, an error occurs, then the LT object causes a transaction rollback. The Local Transaction object has actions (e.g., database changes, save-to-file-storage requests, save-to-queue requests) added to it so that it contains a collection of actions to transact. On command, the Local Transaction object attempts to complete all of the actions that it contains together, as though they were a single transaction. Either all actions complete successfully, so that all changes involved remain in the system (records and updates are stored in the database, files are stored in their locations, and queue items are stored awaiting processing), or, if completing any action contained within the Local Transaction object fails, all the actions are rolled back (or logged as errors) and abandoned, leaving the system in its prior state: no records or updates remain in the database, no files are stored (or any stored files are logged as orphans), and no queue items are logged. In either case, data consistency and integrity are maintained across the disparate resources (database, file storage, and queue).
[0136] In this context, "actions" refer to individual operations or tasks that the handler object needs to perform. These actions could include database changes, saving data to file storage, sending messages to a queue, or any other task required by the application logic. Each action represents a discrete unit of work that the handler object is responsible for executing. "Transactions", on the other hand, represent a higher-level concept involving a collection of related operations that are treated as a single atomic unit. Transactions typically involve multiple actions or database operations that need to be executed together in a coordinated manner to maintain data consistency and integrity. Transactions ensure that either all operations within the transaction are completed successfully, or none of them are, following the ACID properties.
[0137] In some embodiments, the actions added to the handler object may be grouped together and executed as part of a single database transaction. For example, for the systems described herein, the actions added to the handler may include updating the order status in the database, saving order details to file storage, and sending a notification email to the customer. These actions could be grouped together and executed within a database transaction to ensure that all changes are applied atomically.
[0138] However, not all actions added to a handler object necessarily need to be part of a database transaction. Some actions may be independent of each other or may involve interactions with external services that do not support transactional semantics. In such cases, the handler object may still coordinate the execution of these actions effectively, even without the transactional guarantees provided by database transactions.
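As an illustration of the action grouping described in paragraphs [0137] and [0138], the following is a minimal sketch in which discrete actions are queued on a handler and committed together; the `ActionGroup` type, its members, and the action names are assumptions made for this example, not the implementation shown in the figures.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Illustrative only: actions are queued as delegates and committed as one unit.
public sealed class ActionGroup
{
    private readonly List<(string Name, Func<Task> Run)> _actions = new();

    public void Add(string name, Func<Task> run) => _actions.Add((name, run));

    // Either every queued action completes, or the first failure aborts the group
    // and surfaces to the caller, who is then responsible for cancelling/rolling back.
    public async Task CommitAsync()
    {
        foreach (var (_, run) in _actions)
        {
            await run();
        }
    }
}

public static class OrderWorkflowExample
{
    public static Task Demo(ActionGroup group)
    {
        group.Add("update order status", () => Task.CompletedTask); // e.g. a database update
        group.Add("save order details", () => Task.CompletedTask);  // e.g. a file storage write
        group.Add("notify customer", () => Task.CompletedTask);     // e.g. queue an email
        return group.CommitAsync();
    }
}
```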
[0139] Example 1: In some embodiments a development environment for the systems and methods described herein is provided that includes a database (e.g., SQL Server on Linux, connecting via Entity Framework Core v6), file storage (e.g., Azure Blob Storage), and a queue (e.g., Azure Queue).
[0140] In the systems described herein, updates may be committed to multiple database(s), file storage(s), and/or queue(s). A single action may also include other actions that are committed together at the same time. Non-database actions are started in a separate thread under the assumption that the rest of the commit will succeed, but if there is some problem (for example if the commit is unable to complete), then the system must be able to revert these non-database actions. After all the database changes have been applied but not yet committed to the database (as part of a transaction), validation logic is run. The validation process may result in a positive or negative outcome. In the event of a negative outcome, the transaction may be cancelled. These three actions (committing updates to multiple databases etc., starting non-database actions in a separate thread, and running validation logic) are not straightforward to implement. Specifically, some database frameworks (e.g., EF Core v6) may not support shared transactions across databases running under an operating system like Linux. Also, Azure Queue and file storage actions are called via network API calls. This means that the action itself may pass or fail validation, and in addition the network call itself may succeed or fail independently of the action validation.
[0141] Because native support for the abovementioned three actions is not provided within the software environment (i.e., existing tools do not provide a complete solution that includes appropriate support), any solution would be akin to a "best effort" solution and not necessarily ideal. Post-save tasks could fail after the database commits have been applied; in such instances the validation module is configured to report the failure to the server so that appropriate action can be taken (for example, the issue may be manually resolved). The rest of the system is set up to minimise the impact if this happens. As an example, if a local transaction including a file upload process were to fail and not be reverted properly, the existence of the orphan file on the server does not have any impact other than using additional storage until an automated cleanup process is run to remove the orphan files.
[0142] To address these problems, the inventors have created a solution based on a novel "Local Transaction" (LT) object. The LT class object is a handler. In object oriented programming (OOP), a handler object typically refers to an object that is responsible for handling specific tasks or events within a system. The term "handler" implies that this object is designed to manage and respond to certain actions or conditions, providing a centralised point of control for a particular aspect of the program's functionality. Handler objects are commonly used in event-driven programming, where events (such as user interactions, system notifications, or other occurrences) trigger specific actions. The handler object encapsulates the logic and behaviour associated with handling those events.
[0143] The LT class object is defined to contain:
a. One or more database (DB) contexts (instances that represent a session with the database, can be used to query and save data, and include cached copies of previously retrieved and newly created data).
b. The number of times the object has been "opened".
c. A progress indicator of a completion step.
d. A list of tasks to be validated just prior to committing the transaction.
e. A list of tasks which must be finished prior to the DB commits starting.
f. A list of tasks which must be run after the DB changes have been committed.
g. A list of tasks which must be called in the event of an action being cancelled.
[0144] Each DB context contains a reference to the current "Local Transaction" (if any). A "Local Transaction" (LT) means a transaction occurring within the system's environment of database(s), file storage(s), and queue(s), for example a database transaction being a unit of work that causes a change in the database.
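The field layout described in paragraph [0143] might be sketched as follows; the class and member names, and the use of EF Core `DbContext` instances for the DB contexts, are assumptions made for illustration only.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Illustrative shape of the "Local Transaction" (LT) handler object described in [0143].
public sealed class LocalTransactionState
{
    // a. One or more DB contexts participating in this local transaction.
    public List<DbContext> Contexts { get; } = new();

    // b. The number of times the object has been "opened" (each open needs a matching complete).
    public int OpenCount { get; set; }

    // c. Progress indicator of the completion step.
    public int CompletionStep { get; set; }

    // d. Tasks to be validated just prior to committing the transaction.
    public List<Func<Task<bool>>> PreCommitTasks { get; } = new();

    // e. Tasks which must be finished prior to the DB commits starting (e.g. file/queue saves).
    public List<Task> SaveTasks { get; } = new();

    // f. Tasks which must be run after the DB changes have been committed.
    public List<Func<Task>> PostSaveTasks { get; } = new();

    // g. Tasks which must be called in the event of the action being cancelled.
    public List<Func<Task>> CancelTasks { get; } = new();
}
```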
[0145] The "Local Transaction" object has the following available actions:
a. Open Local Transaction: If the DB context(s) are not already associated with an LT object, then a new LT object is created and associated with the DB context(s), and the open count is incremented.
b. Cancel Local Transaction: The LT object waits for any save tasks to finish (if any), runs the cancel tasks (if any), reverts all outstanding DB changes and checkpoints, and removes the LT association from the DB contexts.
c. Complete Local Transaction: The LT object decrements the open count. If the open count is 0, the LT object waits for any save tasks to finish (if any), saves DB context changes (if any), validates any pre-commit tasks (if any), runs any post-save tasks (if any), reverts all DB checkpoints, and removes the LT association from the DB contexts. If any errors occur, the LT object cancels the LT.
d. Add Pre Commit Task: If the LT is already in the process of being closed then this task will return an error. Otherwise the LT object adds a new unstarted task to the appropriate queue.
e. Add Save Task: If the LT is in the process of being closed then this task will return an error. Otherwise the LT object adds the active task to the appropriate queue. The task may or may not have been asynchronously started yet, depending on available resources.
f. Add Cancel Task: If the LT is in the process of being closed then this task will return an error. Otherwise the LT object adds the unstarted task to the appropriate queue.
g. Add Post Save Task: If the LT is in the process of being closed then this task will return an error. Otherwise the LT object will add the unstarted task to the appropriate queue.
[0146] Every time a "Local Transaction" is opened, it must have a corresponding complete action, otherwise no DB changes will be made, and errors and exceptions must be handled by cancelling the LT before returning to the calling function.
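The requirement in paragraph [0146] (every open matched by a complete, with exceptions handled by cancelling the LT) suggests a calling pattern along the lines of the sketch below; the interface and method names are assumed for illustration and simply mirror the actions listed in paragraph [0145].

```csharp
using System;
using System.Threading.Tasks;

public static class LocalTransactionUsageExample
{
    // Hypothetical surface of the LT handler, mirroring the actions listed in [0145].
    public interface ILocalTransaction
    {
        Task OpenAsync();      // create or join the LT and increment the open count
        Task CompleteAsync();  // decrement the open count; commit when it reaches zero
        Task CancelAsync();    // revert DB changes, run cancel tasks, remove the association
    }

    // Every open has a matching complete; any exception cancels the LT before returning.
    public static async Task RunInLocalTransactionAsync(ILocalTransaction lt, Func<Task> work)
    {
        await lt.OpenAsync();
        try
        {
            await work();            // user code adds save / pre-commit / cancel / post-save tasks here
            await lt.CompleteAsync();
        }
        catch
        {
            await lt.CancelAsync();
            throw;
        }
    }
}
```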
[0147] Example 2: When electronic files are saved to storage (e.g., to an Azure Blob Storage) this is done along with a matching DB record containing the location and metadata about those files and what they are related to. The function 1300 (i.e., the software process) to perform this can be summarised as illustrated in the flow diagram in Figure 13.
[0148] At 1302, open a local transaction (LT) and at 1304 validate the file metadata. If there are any errors then cancel the local transaction (LT) at 1316 and return an error at 1306. At 1308 save a DB checkpoint, and at 1310 start a new asynchronous task to save the file to storage (e.g., Azure), and add the new task to the LT. At 1312 create a new asynchronous task definition to delete the file from storage, and add this new task to the LT. At 1314 complete the local transaction (LT). If at any stage there is an exception, then the local transaction (LT) is cancelled (1316).
[0149] If the LT was created by this function when it was opened (i.e., no LT was already open when the function was called), then, assuming no errors, the DB will be updated and the file saved to storage after the process shown in Figure 13 has completed.
[0150] If the LT was already open when the function was called, the LT will remain open after the complete action 1314 is called. The DB will still be unchanged, while the file may or may not be saved into storage, depending on the progress of that task.
[0151] Code examples to implement the function are shown in Figures 12A-12C.
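As the code of Figures 12A-12C is not reproduced here, the following is a hedged sketch of how the flow of Figure 13 might be expressed; the `IFileSaveLocalTransaction` surface and the validation, upload, delete, and checkpoint helpers are assumptions, and the numeric comments refer to the steps of Figure 13.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

// Assumed LT surface for this sketch (names chosen to mirror the actions in [0145]).
public interface IFileSaveLocalTransaction
{
    Task OpenAsync();
    Task CompleteAsync();
    Task CancelAsync();
    void AddSaveTask(Task startedTask);         // an already-started asynchronous save
    void AddCancelTask(Func<Task> compensate);  // an unstarted compensating action
}

public static class SaveFileExample
{
    // Sketch of function 1300 (Figure 13): save a file to storage with a matching DB record.
    public static async Task SaveFileAsync(
        IFileSaveLocalTransaction lt,
        Func<string, bool> validateMetadata,     // assumed helper for step 1304
        Func<Stream, Task> uploadToStorageAsync, // assumed helper for step 1310
        Func<Task> deleteFromStorageAsync,       // assumed helper for step 1312
        Action saveDbCheckpoint,                 // assumed helper for step 1308
        string metadata,
        Stream fileContents)
    {
        await lt.OpenAsync();                                    // 1302: open the local transaction
        try
        {
            if (!validateMetadata(metadata))                     // 1304: validate the file metadata
                throw new InvalidOperationException("Invalid file metadata");

            saveDbCheckpoint();                                  // 1308: save a DB checkpoint
            lt.AddSaveTask(uploadToStorageAsync(fileContents));  // 1310: start the async upload, add to the LT
            lt.AddCancelTask(deleteFromStorageAsync);            // 1312: delete the file only if the LT is cancelled
            await lt.CompleteAsync();                            // 1314: complete the local transaction
        }
        catch
        {
            await lt.CancelAsync();                              // 1316: any exception cancels the LT
            throw;                                               // 1306: the error is returned to the caller
        }
    }
}
```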
[0152] Figure 14 is a flowchart of the "open local transaction" 1302 step, including checking whether the LT is already open; if yes, the open count is incremented and if no, a new LT is created and associated with a DB context.
[0153] Figure 15 is a flowchart of the "add task" steps, including adding a pre-commit task, a save task, a cancel task, and a post-save task. Each of these steps checks whether the LT is closing (in which case an error is returned); if not, the task is added to the relevant queue.
[0154] Figure 16 is a flowchart of the "cancel local transaction" 1316 step. Once the save tasks have finished, their queue is cleared; the queued cancel tasks are then started and executed asynchronously and, once finished, their queue is cleared; finally, the open count is reset, the DB and LT association is removed, unsaved DB changes are reverted, and the local DB checkpoint is reset.
[0155] Figure 17 is a flowchart of the "complete local transaction" 1314 step. In this example, if any unhandled exception occurs in the steps shown in the blocked area, then the flow will move to the "cancel LT" step and return an error.
[0156] Figure 18 is a flowchart of a "save context changes" step.
[0157] Figure 19 is a flowchart of a file example illustrating how tasks are executed asynchronously and storage of files is managed. In this example, the steps in the blocked area describe the "cancel LT" step.
[0158] Two other challenges that occur in the systems described herein are: adding potential DB changes as part of the single commit without starting a database transaction, while retaining the ability to cancel those changes before the commit starts; and providing the ability to easily query data in the active DB contexts as if a transaction had been started on the DB.
[0159] Existing systems include a number of restrictions that are relevant to finding solutions to these two problems. For example, when performing a query against an EF Core DB context table, by default the conditions are checked against the values currently in the DB, and any objects returned are then added to the local tracking. Also, if the same database entity would be returned but the object is already being locally tracked, then the existing locally tracked object is returned.
[0160] The implication of the above is that queries can return unexpected results, for example as shown in Table B.
| Action | Returned Object | Local Tracking | Database |
|---|---|---|---|
| (initial state) | | | Obj A (ID: A, Name: ABC) |
| Query for ID=A | Obj A (ID: A, Name: ABC) | Obj A (ID: A, Name: ABC) | Obj A (ID: A, Name: ABC) |
| Update A (Name changed to XYZ) | Obj A (ID: A, Name: XYZ) | Obj A (ID: A, Name: XYZ) | Obj A (ID: A, Name: ABC) |
| Query for ID=A | Obj A (ID: A, Name: XYZ) (returns the value from the local tracking) | Obj A (ID: A, Name: XYZ) | Obj A (ID: A, Name: ABC) |
| Query for Name=XYZ | returns nothing | Obj A (ID: A, Name: XYZ) | Obj A (ID: A, Name: ABC) |
| Query for Name=ABC | Obj A (ID: A, Name: XYZ) (returns the value from the local tracking) | Obj A (ID: A, Name: XYZ) | Obj A (ID: A, Name: ABC) |
| Save changes | | Obj A (ID: A, Name: XYZ) | Obj A (ID: A, Name: XYZ) |
| Query for Name=XYZ | Obj A (ID: A, Name: XYZ) (returns the value from the local tracking) | Obj A (ID: A, Name: XYZ) | Obj A (ID: A, Name: XYZ) |

Table B: Example Query Results
[0161] In this example, the queries after making the change may return unexpected results. Using transactions would allow for changes to be saved throughout the code to then allow subsequent queries to return the expected values, however it would then lock down the tables in the database for longer than is desired. This has additional issues when running queries searching on more than one table, depending on whether any or all of the related objects have been loaded into the local context or not.
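The behaviour summarised in Table B can be reproduced with a few EF Core calls; in the sketch below (the `Item` entity and `TrackingDbContext` are hypothetical), the query filtered on the new name returns nothing before saving because the filter is evaluated against the database, while a query matching the old database value returns the locally tracked object that already holds the new name.

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Item
{
    public string Id { get; set; } = "";
    public string Name { get; set; } = "";
}

public class TrackingDbContext : DbContext
{
    public DbSet<Item> Items => Set<Item>();
}

public static class LocalTrackingExample
{
    public static async Task Demo(TrackingDbContext db)
    {
        // The database currently holds Obj A = { Id = "A", Name = "ABC" }.
        var a = await db.Items.SingleAsync(i => i.Id == "A"); // query by key: now tracked locally

        a.Name = "XYZ"; // change only the locally tracked object; the database still has "ABC"

        // The WHERE clause runs against the database, so this finds nothing yet,
        // even though the tracked object's Name is already "XYZ".
        var byNewName = await db.Items.Where(i => i.Name == "XYZ").ToListAsync();
        // byNewName.Count == 0

        // This matches the database row (Name = "ABC"), but the instance returned is the
        // locally tracked object, so its Name reads back as "XYZ".
        var byOldName = await db.Items.SingleOrDefaultAsync(i => i.Name == "ABC");
        // byOldName.Name == "XYZ"

        await db.SaveChangesAsync(); // only now does a query for Name == "XYZ" return Obj A
    }
}
```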
[0162] The inventors have determined that these problems can be addressed by ensuring that the user code does not directly access the DB contexts unless the service call has a specific need to and/or if the service calls would not be impacted by these issues. Accordingly, the inventors have created a method whereby a "Data Access Service" is defined to be used by user code. For each of the database objects which need to be accessed within the "local transaction" various methods for retrieving the data are defined as follows:
a. Get Object for <keyid>
b. Get Objects for <list of keyid>
c. Query if Object(s) exist for given query
d. Query for Object(s) matching given query
e. Query for Object(s) matching given query and <list of keyid>
f. Get Child Objects for parent <keyid>
[0163] These methods utilise the local context for obtaining objects which have already been retrieved from the database and not yet saved, as well as all objects which have been deleted locally but not yet saved to the database.
[0164] Using the local context as above alongside querying the database directly, the data access service is able to query the data within the scope of the "local transaction" to return the expected results.
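A minimal sketch of how such a "Data Access Service" lookup might merge the local change tracker with a database query is shown below, reusing the hypothetical `Item` entity and `TrackingDbContext` from the previous sketch; only the "Get Object for keyid" and "Get Objects for list of keyid" methods are illustrated, and the member names are assumptions.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Illustrative data access service: results reflect the scope of the "local transaction"
// by consulting locally tracked (unsaved) objects before querying the database.
public class ItemDataAccessService
{
    private readonly TrackingDbContext _db;
    public ItemDataAccessService(TrackingDbContext db) => _db = db;

    // Get Object for <keyid>
    public async Task<Item?> GetObjectForKeyAsync(string keyId)
    {
        // Locally deleted objects must not be returned even if they still exist in the DB.
        var entry = _db.ChangeTracker.Entries<Item>()
            .FirstOrDefault(e => e.Entity.Id == keyId);
        if (entry != null)
            return entry.State == EntityState.Deleted ? null : entry.Entity;

        return await _db.Items.SingleOrDefaultAsync(i => i.Id == keyId);
    }

    // Get Objects for <list of keyid>
    public async Task<List<Item>> GetObjectsForKeysAsync(IReadOnlyCollection<string> keyIds)
    {
        var results = new List<Item>();
        foreach (var keyId in keyIds)
        {
            var item = await GetObjectForKeyAsync(keyId);
            if (item != null)
                results.Add(item);
        }
        return results;
    }
}
```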
[0165] In addition, the novel method defines an "Object Service" to be used by user code for each object. This entails:
a. Defining methods for reading data using the "Data Access Service" calls and returning object models.
b. For object models which only include data from a single object, no additional processing is needed.
c. For object models which include data from child/parent objects, additional processing is needed to retrieve those details (again using the appropriate "Data Access Service" calls for those child/parent objects) and to update the object model before returning the combined results.
[0166] Example 3: In one example embodiment, there are two objects: "Scope" and "Scope Item". "Scope" contains a field with the time zone the scope is performed under. For the "Scope Item" model, the time zone is included from the parent scope. The process 2000 followed by a method to retrieve the list of "Scope Item" models for a given scope may be understood with reference to the flow diagram illustrated in Figure 20 of the drawings. At 2002 it is determined whether a scope ID was provided. If no, at 2006 an empty list is returned. If yes, at 2004 the data access service is called to retrieve scope item models for the given scope ID. If this query is run fully on the local context or fully on the database, then the models will be fully retrieved. At 2008 it is determined whether any scope items were returned. If no, an empty list is returned at 2006. If yes, at 2010 it is determined whether any of the returned scope item models are incomplete. If no, the final list is returned at 2014. If yes, at 2012 the scope IDs for the incomplete scope item models are obtained, the missing scope details for those scope IDs are retrieved, and the incomplete scope item models are updated with those additional details before the final list of scope item models is returned at 2014.
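The process 2000 might be expressed roughly as follows; the model class, service shape, and the two injected data access calls are assumptions for this sketch, and the numeric comments refer to the steps of Figure 20.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical object model for the Example 3 sketch.
public class ScopeItemModel
{
    public string Id { get; set; } = "";
    public string ScopeId { get; set; } = "";
    public string? TimeZone { get; set; }          // included from the parent Scope
    public bool IsComplete => TimeZone != null;
}

public class ScopeItemService
{
    // Assumed Data Access Service calls; signatures are illustrative only.
    private readonly Func<string, Task<List<ScopeItemModel>>> _getScopeItemsForScope;
    private readonly Func<IReadOnlyCollection<string>, Task<Dictionary<string, string>>> _getTimeZonesForScopes;

    public ScopeItemService(
        Func<string, Task<List<ScopeItemModel>>> getScopeItemsForScope,
        Func<IReadOnlyCollection<string>, Task<Dictionary<string, string>>> getTimeZonesForScopes)
    {
        _getScopeItemsForScope = getScopeItemsForScope;
        _getTimeZonesForScopes = getTimeZonesForScopes;
    }

    // Process 2000 (Figure 20): retrieve the Scope Item models for a given scope.
    public async Task<List<ScopeItemModel>> GetScopeItemModelsAsync(string? scopeId)
    {
        if (string.IsNullOrEmpty(scopeId))                          // 2002: was a scope ID provided?
            return new List<ScopeItemModel>();                      // 2006: return an empty list

        var items = await _getScopeItemsForScope(scopeId);          // 2004: call the data access service
        if (items.Count == 0)                                       // 2008: any scope items returned?
            return items;                                           // 2006

        var incomplete = items.Where(i => !i.IsComplete).ToList();  // 2010: any incomplete models?
        if (incomplete.Count == 0)
            return items;                                           // 2014: return the final list

        // 2012: retrieve the missing parent Scope details and patch the incomplete models.
        var scopeIds = incomplete.Select(i => i.ScopeId).Distinct().ToList();
        var timeZones = await _getTimeZonesForScopes(scopeIds);
        foreach (var item in incomplete)
            if (timeZones.TryGetValue(item.ScopeId, out var tz))
                item.TimeZone = tz;

        return items;                                               // 2014
    }
}
```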
[0167] Figure 21 is a flowchart illustrating the steps involved in the "Get Object for Key" process, in which the local context is checked for an object with the keyid (key identifier) and the object for the keyid is retrieved; if the object is not locally deleted, the database is queried for the object for the keyid before the object is returned to the data access service.
[0168] Figure 22 is a flowchart for the "Get Objects for List of Keys" process in which the local context is checked for all objects matching the list of keyid.
[0169] Figure 23 is a flowchart for the "Query if Object Exists for Given Query" process.
[0170] Figure 24 is a flowchart for the "Query for Objects matching Given Query" process.
[0171] Figure 25 is a flowchart for the "Query for Objects matching Given Query and List of Keys" process.
[0172] Figure 26 is a flowchart for the "Get Child Objects for Parent Key" process.
[0173] Figure 27 is a flowchart for the "Get Object Models matching List of Keys" process.
It will be understood by persons skilled in the art of the invention that many modifications may be made without departing from the spirit and scope of the invention.
Claims (7)
1. A method of ensuring data integrity when processing multiple simultaneous transactions using cloud-native resources, the method implemented by a handler object and comprising:
receiving a plurality of action requests;
executing, as a result of the requests, a corresponding plurality of actions; and
if, during the executing, an error occurs in at least one action, then cancelling all actions being executed concurrently when the error occurs.
2. The method of claim 1, wherein at least one action is associated with at least one non-database cloud-based resource.
3. The method of claim 2, wherein the at least one non-database cloud-based resource includes one or more of: a file storage and a queue.
4. The method of any one of the preceding claims, wherein at least one action is associated with a cloud-based database.
5. The method of any one of the preceding claims, wherein executing the plurality of actions comprises executing the actions in parallel and asynchronously.
6. A workflow management system comprising:
one or more workflow management clients configured to provide a user interface for inputting data to the system and displaying data provided by the system;
a workflow management server in communication with the workflow management clients; and
at least one database configured to store workflow management records,
wherein the workflow management server is configured to use the method of any one of claims 1 to 5 in order to: receive workflow data from the one or more workflow management clients; access the at least one database to store and retrieve the workflow data; and process the workflow data in order to output to a user, via the one or more clients, workflow notifications.
7. The system of claim 6, wherein at least one workflow management client is configured to receive a user input that adds, changes, and/or removes workflow data from the workflow management records stored on the at least one database.
Figure 1 (drawing sheet 1/27)
Figure 2 (drawing sheet 2/27)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2025220811A AU2025220811A1 (en) | 2023-02-10 | 2025-08-22 | Handler object for workflow management system |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2023900323 | 2023-02-10 | ||
| AU2023900323A AU2023900323A0 (en) | 2023-02-10 | Workflow management system |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| AU2025220811A Division AU2025220811A1 (en) | 2023-02-10 | 2025-08-22 | Handler object for workflow management system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| AU2024200855A1 true AU2024200855A1 (en) | 2024-08-29 |
Family
ID=90354740
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| AU2024200855A Abandoned AU2024200855A1 (en) | 2023-02-10 | 2024-02-09 | Handler object for workflow management system |
| AU2025220811A Pending AU2025220811A1 (en) | 2023-02-10 | 2025-08-22 | Handler object for workflow management system |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| AU2025220811A Pending AU2025220811A1 (en) | 2023-02-10 | 2025-08-22 | Handler object for workflow management system |
Country Status (2)
| Country | Link |
|---|---|
| AU (2) | AU2024200855A1 (en) |
| GB (1) | GB2628706A (en) |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP3901060B2 (en) * | 2002-08-28 | 2007-04-04 | 日本電気株式会社 | Application update processing method, update processing system, and update processing program |
| US7451432B2 (en) * | 2004-10-01 | 2008-11-11 | Microsoft Corporation | Transformation of componentized and extensible workflow to a declarative format |
| US10831510B2 (en) * | 2018-07-06 | 2020-11-10 | International Business Machines Corporation | Method to design and test workflows |
2024
- 2024-02-09 AU AU2024200855A patent/AU2024200855A1/en not_active Abandoned
- 2024-02-09 GB GB2401779.0A patent/GB2628706A/en active Pending
2025
- 2025-08-22 AU AU2025220811A patent/AU2025220811A1/en active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| GB2628706A (en) | 2024-10-02 |
| AU2025220811A1 (en) | 2025-09-11 |
| GB202401779D0 (en) | 2024-03-27 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12099819B2 (en) | Compositional entity modeling systems and methods | |
| US8762187B2 (en) | Easy process modeling platform | |
| US20030195789A1 (en) | Method for incorporating human-based activities in business process models | |
| CN110222106A (en) | Integrated workflow and db transaction | |
| CN104471595A (en) | Workflow management device and workflow management method | |
| US11295260B2 (en) | Multi-process workflow designer | |
| CN115841310A (en) | Construction method of plan flow model, event processing method and device | |
| CN108038665A (en) | Business Rule Management method, apparatus, equipment and computer-readable recording medium | |
| US8726286B2 (en) | Modeling and consuming business policy rules | |
| US8606762B2 (en) | Data quality administration framework | |
| JP7246301B2 (en) | Program development support system and program development support method | |
| JP2016181019A (en) | Order reception processing system and order reception processing method | |
| US8332851B2 (en) | Configuration and execution of mass data run objects | |
| US11699115B2 (en) | System and method for modular customization of intermediate business documentation generation | |
| AU2024200855A1 (en) | Handler object for workflow management system | |
| US12107738B2 (en) | User interfaces for cloud lifecycle management | |
| WO2020155167A1 (en) | Application of cross-organizational transactions to blockchain | |
| CN117873547A (en) | Data development management method, device, computer equipment and storage medium | |
| EP4100844B1 (en) | Handling faulted database transaction records | |
| US11288611B2 (en) | Multi-process workflow designer user interface | |
| EP2601627B1 (en) | Transaction processing system and method | |
| JP2010257327A (en) | Project management support apparatus, project management support method, and project management support program | |
| Zuckmantel et al. | Event-based data-centric semantics for consistent data management in microservices | |
| JP6662153B2 (en) | Program creation system | |
| US20250272668A1 (en) | Transfer groups |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | DA3 | Amendments made section 104 | Free format text: THE NATURE OF THE AMENDMENT IS: TO AMEND THE APPLICANT NAME TO SPYDERTECH CONSULTING PTY LTD |
| | MK5 | Application lapsed section 142(2)(e) - patent request and compl. specification not accepted | |