Data Classification Using a Digital Taxonomy

As the volume of digital content grows, so does the need for efficient and accurate data retrieval

As an organization's vast collection of data continues to grow, it becomes increasingly difficult for users to find the information they need. You need only look at the success of Google to see the importance of search engine technology. Unfortunately, traditional search engines that rely primarily on keyword matching often return unintended results.

This makes finding the information that you're really after time-consuming and inefficient. To make data search and discovery more productive, organizations are turning to taxonomy-based data classification.

Taxonomy classification is a means of creating order out of large collections of data. At its most basic level, a taxonomy is simply a collection of terms or subjects. The strength of the model, however, comes from the taxonomy's ability to also define a term's relationship to other terms. This provides the means to derive a term's context based on the relationships in the taxonomy. If a single term has several different meanings, each of those meanings is captured as a separate association defined in the taxonomy.

In most implementations the model is flexible, allowing relationships to be expressed in much greater detail than a strict hierarchical model permits. This allows for the definition of "related" and "equivalent" terms, something that is more difficult in typical hierarchical trees. Among other benefits, this makes it possible to implement an Amazon-like "recommendation" engine to find related items that are defined in similar topic areas within the same taxonomy.

When a taxonomy classification is being used, data added to the system is classified using terms that have already been defined in the taxonomy. When data is associated with one or more terms, the data inherits the properties and relationships of those terms. This reduces the work involved in classifying new data. Also, as the taxonomy definition is improved and updated, the new term associations take effect for existing data without the need to go back and manually reclassify it.

Finding Your Data: No Problem!
Removing keyword ambiguity should be a goal of all search implementations. With conventional search engines, a keyword search for the term "star" could return results on both astronomy and Hollywood actors. With a taxonomy-based search, the multiple contexts would be known and presented to the user, allowing search results to be refined based only on the desired subject matter, or "facets." Irrelevant data is filtered out, leaving behind only the results that are applicable to the selected topic area.

The same system that removes ambiguity also allows for the benefit of data discovery. Looking for data using a taxonomy navigation tool is similar to browsing the book aisles of a library. You may not know what you're looking for, but you'll know it when you see it. Users are able to "browse" the data that is associated with nearby terms in the taxonomy, allowing them to find information they might not have discovered in a search using known keywords.

Given that the same term could be relevant to many different subject areas, there are potentially many paths to the same data, allowing for expansive data discovery. Imagine searching for a brand of Merlot red wine and then being presented with a selection of foods that go best with that variety. That is the power of a taxonomy classification based search!

Enter ColdFusion and XML
XML is the emerging standard for defining taxonomies. Many of the currently available tools for creating a taxonomy specification provide XML export functionality. This is good news for ColdFusion developers, who already have a collection of functions available for working with XML.

In an XML definition, each term in the taxonomy is an element with its own collection of attributes and subelements. A standard definition will include tag markup for each type of relationship that can be represented. For most terms this will include "narrower term" and "broader term" tags, indicating the term's hierarchical position in a given context. In more advanced systems, XML elements would also be added to represent the nonhierarchical relationships. A sample XML specification is shown in Listing 1.
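Listing 1 is not reproduced here. Purely as an illustration of the idea, and not as any published format, a single term entry might look something like the following; every element and attribute name is hypothetical:

<!-- Hypothetical term entry; element and attribute names are illustrative only -->
<term id="merlot">
   <preferredLabel>Merlot</preferredLabel>
   <broaderTerm ref="redWine"/>
   <narrowerTerm ref="merlotReserve"/>
   <relatedTerm ref="beefDishes"/>
   <equivalentTerm ref="merlotNoir"/>
</term>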

Although XML is widely used as the language for taxonomy definition, an authoritative standard format for these definitions is still pending. Given this, it is best to make the implementation as flexible as possible, allowing future attributes and term relationship types to be added easily with little to no refactoring. Ideally, an accepted Document Type Definition (DTD) will be created that allows for validation of the XML. Until then, it is possible to implement custom validation that uses XMLSearch() with an XPath expression to verify that the required XML elements are present. Organizations may also want to consider creating their own DTD to be used for validation.
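As a rough sketch of that custom-validation approach, assuming the hypothetical element names from the illustration above and a taxonomy.xml file on disk, the check might look like this:

<cfscript>
// Rough validation sketch; the element names (term, preferredLabel) follow the
// hypothetical illustration above, not a published DTD.
taxonomyXml = FileRead(ExpandPath("taxonomy.xml"));  // assumed file name and location
taxonomyDoc = XmlParse(taxonomyXml);

// Find any <term> element missing a required id attribute or preferredLabel child.
badTerms = XmlSearch(taxonomyDoc, "//term[not(@id) or not(preferredLabel)]");
isValid = (ArrayLen(badTerms) EQ 0);

if (NOT isValid) {
    WriteOutput("Validation failed: " & ArrayLen(badTerms) & " term(s) are incomplete.");
}
</cfscript>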

Given a plain XML definition, it becomes trivial to use ColdFusion's XMLParse() to load the definition and create the XML object in memory. Once the XML object is obtained, an XPath expression can be used with ColdFusion's XMLSearch() to extract the relationships. Depending on the criteria specified in the XPath expression, it is possible to process a single specified term or the entire taxonomy at once. (See CFDJ, Vol. 4, issue 4 for an excellent article on parsing XML.)
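A minimal sketch of that load-and-extract step, again using the hypothetical element names and file name from the earlier illustration:

<cfscript>
// Load the taxonomy definition and create the XML document object in memory.
taxonomyDoc = XmlParse(FileRead(ExpandPath("taxonomy.xml")));  // assumed file name

// Retrieve every term in the taxonomy...
allTerms = XmlSearch(taxonomyDoc, "//term");

// ...or only the broader-term relationships of a single specified term.
parents = XmlSearch(taxonomyDoc, "//term[@id='merlot']/broaderTerm");

for (i = 1; i LTE ArrayLen(parents); i = i + 1) {
    WriteOutput(parents[i].XmlAttributes.ref & "<br>");
}
</cfscript>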

Can ColdFusion Handle It?
Organizations that are most likely to implement a taxonomy classification system are those with high volumes of digital data. Given the large amount of data, it becomes important to keep performance in mind when designing the system. The two main factors that affect scalability are the number of terms in the taxonomy and the amount of data associated with those terms.

A taxonomy with 20,000 or more terms is generally considered large. Parsing the XML and storing the terms in memory are potentially intensive processes. With ColdFusion, however, tests using a 100,000-term taxonomy on a mid-powered server resulted in load times of only a few seconds. This included the XMLParse() call to create the XML object, the XMLSearch() call to retrieve the terms, and the assignment calls to create an associative array of the terms along with their defined relationships. An additional validation step added only a marginal increase to the total processing time.
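A sketch of the kind of load-and-assignment work being timed here; the struct layout is only one possible shape, and the element names again follow the earlier hypothetical illustration:

<cfscript>
// Build an associative array (struct) of terms keyed by id, each holding its relationships.
start = getTickCount();
taxonomyDoc = XmlParse(FileRead(ExpandPath("taxonomy.xml")));  // assumed file name

terms = StructNew();
termNodes = XmlSearch(taxonomyDoc, "//term");

for (i = 1; i LTE ArrayLen(termNodes); i = i + 1) {
    node = termNodes[i];
    entry = StructNew();
    entry.broader = ArrayNew(1);
    entry.narrower = ArrayNew(1);
    entry.related = ArrayNew(1);

    // Collect each relationship type from the term's child elements.
    for (j = 1; j LTE ArrayLen(node.XmlChildren); j = j + 1) {
        child = node.XmlChildren[j];
        if (child.XmlName EQ "broaderTerm") ArrayAppend(entry.broader, child.XmlAttributes.ref);
        if (child.XmlName EQ "narrowerTerm") ArrayAppend(entry.narrower, child.XmlAttributes.ref);
        if (child.XmlName EQ "relatedTerm") ArrayAppend(entry.related, child.XmlAttributes.ref);
    }

    terms[node.XmlAttributes.id] = entry;
}

WriteOutput("Loaded " & StructCount(terms) & " terms in " & (getTickCount() - start) & " ms");
</cfscript>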

Even though a taxonomy will typically be limited to several thousand terms, there may be millions of data records associated with those terms. Once finalized, the size of the taxonomy definition tends to stay more or less fixed, unlike the data count of an active system, which will see continued growth. Even though reading the data associations from memory is fast, the memory consumption could become excessive as the system ages. It is usually safe to load the entire taxonomy into memory, because even a large classification will make only a modest dent in memory consumption. This is not true of the actual system data, for which the typical design tradeoff between speed and memory must be considered.

To avoid scalability problems, you can rely on a simple CFQUERY database call to retrieve the associations for a specific term. For further improvements, commonly referenced terms and their respective data associations can be cached for fast lookup. See Figure 1 for a basic process flow starting with the initial XML import and concluding with the user obtaining results based on the specified criteria.
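A hedged sketch of both techniques; the datasource and the table and column names (content, content_term, contentId, termId, title) are hypothetical and would need to match your own schema:

<!--- Retrieve the content associated with a term, caching common lookups in the application scope. --->
<cffunction name="getContentForTerm" returntype="query" output="false">
    <cfargument name="termId" type="string" required="true">
    <cfset var content = "">

    <cfif NOT StructKeyExists(application, "termCache")>
        <cfset application.termCache = StructNew()>
    </cfif>

    <!--- Serve commonly referenced terms straight from the cache. --->
    <cfif StructKeyExists(application.termCache, arguments.termId)>
        <cfreturn application.termCache[arguments.termId]>
    </cfif>

    <cfquery name="content" datasource="#application.dsn#">
        SELECT  c.contentId, c.title
        FROM    content c
        INNER JOIN content_term ct ON ct.contentId = c.contentId
        WHERE   ct.termId = <cfqueryparam value="#arguments.termId#" cfsqltype="cf_sql_varchar">
    </cfquery>

    <cfset application.termCache[arguments.termId] = content>
    <cfreturn content>
</cffunction>

In a production setting the cache would also need an invalidation strategy and locking around application-scope writes; the sketch omits both for brevity.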

Future Development
The use of a taxonomy classification system for digital data is still relatively new. Over the past five years there has been much progress. However, work is still needed before there is a widely accepted vocabulary and common understanding of the framework and concepts.

One of the biggest challenges for an organization that wants to implement a taxonomy classification is the time and effort involved in creating the definition specification. Currently there are some commercially available definitions, but these are offered only in a limited number of business areas. Organizations that already have an institutional thesaurus are well positioned to use taxonomy-based classification. A thesaurus is often a precursor to a taxonomy, and the terms and vocabulary used to create a thesaurus are easily transferable. The National Information Standards Organization (NISO) has published guidelines for the construction of monolingual thesauri, available as the ANSI/NISO Z39.19 standard. This is a good starting point for those exploring the possibility of implementing such a system.

Another implementation challenge is ensuring that data is classified correctly. There are auto-classification tools available that attempt to derive data context using natural-language algorithms. These tools attempt to "understand" the content of the given data by evaluating not just the keywords but also their surrounding context. Once that context is determined, the tool assigns the data to the proper term in the taxonomy. The accuracy of these tools won't match human classification, but it could be acceptable, especially if the data is already tagged with some form of metadata.

Gaining Steam...
The idea of using a taxonomy to organize and classify data is not new. In fact, the term "taxonomy" comes from biology in reference to the classification of living things. Applying this idea to vast stores of digital content, however, is a practice that has only recently gained steam. As digital content repositories grow, finding your target data quickly and accurately will seem more like finding the proverbial needle in a haystack. A taxonomy classification system is an excellent complement to a traditional keyword search and will help users efficiently find the data they need.

About the Author

David Athey is a senior developer with PaperThin Inc., based in Quincy, MA. He is an Advanced Certified ColdFusion Developer with expertise in enterprise content management and Web-based publishing.
