
Macromedia's Data File Access API Architecture Unleashed


Like its predecessors, Macromedia's most recent installment of the Devnet Resource Kit (DRK 3) is stocked with many excellent utilities for Flash developers. Unlike previous releases, DRK 3 aims to make the lives of ColdFusion developers easier by including many applications and development tools for use in CFMX applications. One of these is an Application Programming Interface I developed called the Data File Access API (DFA API).

When I set out to design the DFA API, the goal was an API that lets developers store data in text files as either XML or CSV, and access and manipulate that data as easily as if it were stored in a database. With that core functionality in mind, the primary objective was an architecture that performs as well as possible.

In addition, two other primary objectives were to make the API very easy to use and to make it flexible enough for developers to extend or implement in any way they might need. Ideally, as developers become more familiar with the API, they will be inspired to use it as the backbone of more creative solutions to meet their applications' needs. Let's examine how the features of ColdFusion MX were used to meet these objectives.

The first thing I needed to consider was how to architect the API not only to define and store data, but also to make this data available for very fast filtering and retrieval. I decided to create a ColdFusion Component that houses all of the methods for working with the data and stores all of the data in memory (as needed) in a proprietary XML DOM format, whether the data originated as CSV or XML text. I refer to these as "data tables" and think of them as analogous to database tables cached in memory.

The API needs to be flexible enough to allow data to be retrieved and filtered using XPath or SQL, so component methods exist to determine whether the query passed is XPath or SQL. XPath is applied directly to the XML DOM in memory and in order to use SQL, the XML DOM is first converted to a ColdFusion query object and then queried using Query of Queries. Any call to the API to extract data can retrieve that data as XML or as a ColdFusion query.
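The DRK doesn't document the actual detection logic, but the idea of classifying an incoming query string can be sketched in CFML. The function name below is hypothetical; it relies on SQL statements always beginning with a known verb, while XPath expressions typically begin with "/" or "//":

```cfml
<!--- Hypothetical sketch: classify a query string as SQL or XPath.
      The DFA API's real detection logic may differ. --->
<cffunction name="isSQLQuery" returntype="boolean" output="false">
    <cfargument name="queryText" type="string" required="true">
    <!--- Take the first whitespace-delimited token of the query --->
    <cfset var firstWord = ListFirst(Trim(arguments.queryText), " #Chr(9)##Chr(10)##Chr(13)#")>
    <!--- SQL statements open with one of the four supported verbs --->
    <cfreturn ListFindNoCase("SELECT,INSERT,UPDATE,DELETE", firstWord) GT 0>
</cffunction>
```

A query classified as XPath can be applied straight to the in-memory DOM with XmlSearch(), while a SELECT takes the Query of Queries path described above.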

Another challenge in developing the API was how to physically define and store the data used in applications. I split the data table definition task across three XML files: one that holds the table definitions (column names, data types, default values, etc.); one that maps those definitions to actual storage locations (relative or absolute paths or URLs) so that the API knows where to find the data; and one that contains the data itself. I chose this architecture so that developers can easily write validation routines (the data type and required properties of data table columns aren't otherwise used), share data table definitions between applications, etc.
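To make the three-file split concrete, here is an illustrative sketch of what the files might contain. The element and attribute names are assumptions for illustration, not the DRK's actual schema:

```xml
<!-- definitions file: data table "columns" -->
<datatables>
  <datatable name="employees">
    <column name="id"   type="numeric" required="true"/>
    <column name="name" type="string"  default=""/>
  </datatable>
</datatables>

<!-- mappings file: where each table's data physically lives -->
<mappings>
  <map datatable="employees" location="/data/employees.xml"/>
</mappings>

<!-- data file: the rows themselves -->
<employees>
  <row><id>1</id><name>Simon</name></row>
</employees>
```

Because the definition and mapping files are separate, two applications can share one set of table definitions while pointing at entirely different data files.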

In addition to the ability to retrieve data mentioned above, methods were also written to parse SQL statements that perform INSERT, UPDATE, or DELETE operations on data tables. Unlike when a SELECT statement is passed to the API, converting the XML data table to a ColdFusion query object will do no good for INSERT, UPDATE, and DELETE commands, as ColdFusion does not support these SQL statements in a Query of Queries.

Instead, the SQL is broken into its various components and then executed against the appropriate XML nodes directly. In the case of DELETE commands, working with the data directly as XML proved more efficient than converting the XML to a query object, retrieving the data not being deleted, and converting the new query object back to XML.
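Working on the XML nodes directly can be sketched in CFML. The function below is a hypothetical, simplified stand-in for the API's DELETE handling, assuming the WHERE clause has already been reduced to a single column/value match:

```cfml
<!--- Hypothetical sketch: delete <row> nodes whose <id> matches a value,
      mutating the XML document directly rather than round-tripping
      through a query object. --->
<cffunction name="deleteRowsById" returntype="void" output="false">
    <cfargument name="tableXml" type="xml" required="true">
    <cfargument name="idValue" type="string" required="true">
    <cfset var rows = arguments.tableXml.XmlRoot.XmlChildren>
    <cfset var i = 0>
    <!--- Iterate backwards so deletions don't shift unvisited indexes --->
    <cfloop from="#ArrayLen(rows)#" to="1" step="-1" index="i">
        <cfif rows[i].id.XmlText EQ arguments.idValue>
            <cfset ArrayDeleteAt(rows, i)>
        </cfif>
    </cfloop>
</cffunction>
```

Deleting from the XmlChildren array in place avoids building a second query object and serializing it back to XML, which is where the efficiency gain comes from.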

In addition to working with XML, the API needed to support CSV, so a method was added to parse CSV text and convert it to an XML data table in memory. The first row of values in the CSV content is used to create a data table definition, and all other rows populate that definition. Other methods were added for validating the various entities in use, for easier debugging, and so on. Two other major concerns while developing the API were how to make the API easy for all developers to use and how to handle concurrency issues.
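The first-row-as-definition conversion can be sketched as follows. This is a hypothetical, simplified version that assumes well-formed CSV with no embedded commas, quotes, or empty fields; the DRK's parser handles more than this:

```cfml
<!--- Hypothetical sketch: turn simple CSV text into an XML data table,
      using row one as the column definition. --->
<cffunction name="csvToXml" returntype="xml" output="false">
    <cfargument name="csvText" type="string" required="true">
    <cfargument name="tableName" type="string" required="true">
    <cfset var lines = ListToArray(arguments.csvText, Chr(10))>
    <cfset var columns = ListToArray(lines[1])>
    <cfset var xmlOut = "<" & arguments.tableName & ">">
    <cfset var r = 0>
    <cfset var c = 0>
    <!--- Each data row becomes a <row> node; each field becomes an
          element named after its column from the header row --->
    <cfloop from="2" to="#ArrayLen(lines)#" index="r">
        <cfset xmlOut = xmlOut & "<row>">
        <cfloop from="1" to="#ArrayLen(columns)#" index="c">
            <cfset xmlOut = xmlOut & "<" & columns[c] & ">"
                & ListGetAt(lines[r], c) & "</" & columns[c] & ">">
        </cfloop>
        <cfset xmlOut = xmlOut & "</row>">
    </cfloop>
    <cfreturn XmlParse(xmlOut & "</" & arguments.tableName & ">")>
</cffunction>
```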

In order to deal with data table memory and physical file concurrency issues, all data retrieval is performed within "read-only" named locks. When row(s) of data are inserted, updated, or deleted from a data table, the data table in memory is first manipulated within an "exclusive" named lock. Afterwards, the entire data table is written to file as an XML string, also within an "exclusive" named lock. This approach minimizes locking on the server, and prevents developers from having to lock API access in their applications, because the API is handling all of the locking - local to the code blocks that require locks.
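The locking discipline described above maps directly onto cflock. The lock names, scopes, and file path below are assumptions for illustration:

```cfml
<!--- Hypothetical sketch of the locking discipline: reads under a
      readonly named lock, mutation and persistence each under their
      own exclusive named lock. --->

<!--- Reading a data table --->
<cflock name="dfa_employees" type="readonly" timeout="10">
    <cfset result = XmlSearch(application.dataTables.employees, "//row")>
</cflock>

<!--- Mutating the in-memory table --->
<cflock name="dfa_employees" type="exclusive" timeout="10">
    <!--- apply the INSERT/UPDATE/DELETE to the XML here --->
</cflock>

<!--- Persisting the whole table to disk as an XML string --->
<cflock name="dfa_employees_file" type="exclusive" timeout="10">
    <cffile action="write" file="/data/employees.xml"
            output="#ToString(application.dataTables.employees)#">
</cflock>
```

Scoping each lock to the smallest code block that needs it is what lets concurrent reads proceed in parallel while still serializing writes.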

To make the API easier for developers to use, I wrote a custom tag "wrapper" for the API CFC. The idea behind the custom tag was to let users query the DFA API data for their application with syntax similar to the <cfquery> tag they are already used to. Like <cfquery>, when retrieving data a "name" attribute is passed to assign a name to the result set returned (which may be in query or XML format). A "returntype" attribute specifies whether to return the results of a query as XML or as a ColdFusion query object.

Also similar to <cfquery>, the SQL to SELECT data is passed as the contents of the tag. An XPath attribute is used to select data using an XPath query. Rather than passing a "datasource" name, the tag accepts a "datatable" attribute in order to determine what data table to apply XPath queries to (SQL queries simply name the data table in the SQL). There is also an "XSLT" attribute for performing XSL Transformations on the data (the attribute value is either the location or contents of an XSL stylesheet) and a "CSV" attribute for passing the location of a CSV file (or CSV content) to be parsed into a data table.

Within the tag body a SQL INSERT, UPDATE, SELECT, or DELETE command may be passed, as well as a DROP command to remove a data table from memory, and a SAVE command for committing a data table already in memory to file. The tag itself creates a DFA API instance in the application scope if one doesn't already exist (in start mode), and performs all of its "work" in end mode. The only things required to use the tag are three request scope variables that store the location of the DFA API component, the location of the XML file that defines data table "columns," and the location of the XML file that "maps" these definitions to physical files.
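Putting the pieces together, usage might look like the sketch below. The tag name and request variable names are hypothetical, since the DRK's actual names aren't reproduced here; the attributes (name, returntype, datatable, xpath) are the ones described above:

```cfml
<!--- Hypothetical setup: the three required request scope variables --->
<cfset request.dfaComponentPath = "com.drk.DataFileAccess">
<cfset request.dfaDefinitionsFile = ExpandPath("definitions.xml")>
<cfset request.dfaMappingsFile = ExpandPath("mappings.xml")>

<!--- SELECT via SQL in the tag body, returned as a ColdFusion query --->
<cf_dfaquery name="qEmployees" returntype="query">
    SELECT id, name FROM employees
</cf_dfaquery>

<!--- The same table filtered with XPath instead, returned as XML --->
<cf_dfaquery name="xEmployees" returntype="xml"
             datatable="employees" xpath="//row[id=1]">
</cf_dfaquery>
```

Note that the SQL form names its data table in the statement itself, while the XPath form needs the "datatable" attribute to know which in-memory table to search.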

Though the API was never intended for use in large enterprise-level applications, early tests have yielded surprising performance results. The API definitely performs very well; exactly how much data or how many concurrent users is too many is something you'll have to test for yourself. I wouldn't be surprised to find that even large-scale solutions can be delivered, driven by the API rather than by a traditional database. Even if you decide to stick with more traditional methods of data storage, I highly recommend looking to the DFA API as an example of how to best architect an API, and of how ColdFusion Components, Custom Tags, Query of Queries, and the XML parsing functionality in ColdFusion MX can be combined to achieve impressive results in your applications.

More Stories By Simon Horwith

Simon Horwith is the CIO at AboutWeb, LLC, a Washington, DC based company specializing in staff augmentation, consulting, and training. Simon is a Macromedia Certified Master Instructor and is a member of Team Macromedia. He has been using ColdFusion since version 1.5 and specializes in ColdFusion application architecture, including architecting applications that integrate with Java, Flash, Flex, and a myriad of other technologies. In addition to presenting at CFUGs and conferences around the world, he has also been a contributing author of several books and technical papers.

