While we specialize in producing expert-written content for highly technical markets, perhaps the greatest service we provide our customers is topic development for their blogs and longer-form content marketing assets. It is certainly possible to create your own topic strategy, but our partnership with customers normally involves helping them generate a topic for each piece of practitioner-written content we deliver.
We’ve written this series to help our customers and marketing managers look under the hood and see how we develop topics. Here we examine the data integration and ETL conversation, what is influencing its share of conversation, and the attributes of the practitioner you need in order to cover the topics that fall under this conversation.
In computing, extract, transform, load (ETL) refers to the process of performing these three actions on data from multiple sources, particularly in data warehousing. Integration refers to connecting ETL to enterprise platforms that can then extract useful patterns and information, making the data human-consumable so the enterprise can act on it.
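For readers newer to the space, the extract-transform-load pattern can be sketched in a few lines. This is a minimal illustration only; the file names, field names, and schema are hypothetical.

```python
# Minimal sketch of the extract-transform-load (ETL) pattern:
# pull raw rows from a source, normalize them to one schema,
# and write them into a warehouse table.
import csv
import sqlite3

def extract(path):
    """Extract: read raw rows from a source CSV file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Transform: clean and normalize fields so records from
    different sources share one schema."""
    return [
        {"customer": r["name"].strip().title(),
         "amount": round(float(r["amount"]), 2)}
        for r in rows
    ]

def load(rows, db_path):
    """Load: insert the cleaned rows into a warehouse table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (customer TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (:customer, :amount)", rows)
    conn.commit()
    conn.close()
```

Real enterprise ETL adds scheduling, error handling, and many more sources, but the three stages stay the same.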
March 2018 Data Integration and ETL
Our approach to determining topics within this conversation begins and ends with a share of voice (SoV) calculation, which ultimately gives us an idea of a vendor’s share of this conversation (SoC). Our share of voice methodology is described in some detail in a variety of places, but here is a quick summary:
Share of conversation (or conversation share of voice) is the percentage of any specific conversation you own. Conversation share of voice is more precise because it looks at specific conversations within a market versus focusing only on global SoV compared to competitors. While it’s interesting to know how your brand or product is doing in the world of all products, you can make the greatest impact by going local with specific topic areas.
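At its core, share of conversation reduces to a simple ratio. The sketch below is our own illustration of that ratio, not the platform's actual calculation:

```python
def share_of_conversation(brand_mentions, total_mentions):
    """Share of conversation: the percentage of a specific
    conversation's mentions that belong to one brand."""
    if total_mentions == 0:
        return 0.0
    return 100.0 * brand_mentions / total_mentions

# e.g., 120 brand mentions out of 800 total mentions in the
# "ETL" conversation is a 15% share of that conversation
```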
The Re:each Share of Conversation Calculation for Data Integration and ETL
Fixate’s Re:each platform uses algorithms that derive conversation share of voice across traditional and social media. The phases of the calculation are data collection, normalization, and interpretation. We can’t give you the secret sauce, but we can give you an idea of how we do it.
Later in this post, we will dig deeper into why the March results came out the way they did.
- Identify your place: Identify specific keywords and concepts associated with your brand and product, based on the concepts that appear most often across all conversations you participate in.
- Determine your conversations: From there, the concepts are applied across a body of sources in order to identify the three conversations that are most relevant to you. For each vendor, three types of conversations are identified:
- Demand Gen
- Mindshare/Thought Leadership
- Find your competition: Competition is derived by identifying the top 4-9 vendors in each conversation based on their SoV in those conversations.
- Determine relevant topics: Topic suggestions are derived from entity/concept extraction of the content that was most prevalent in each selected conversation over a set period of time. The concepts with the greatest reach in that conversation are weighted and become the core elements of a suggestion.
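The weighting step in the last bullet can be sketched as follows. The data shape and the reach-based weighting here are illustrative assumptions, not Re:each's actual algorithm:

```python
from collections import Counter

def suggest_topics(articles, top_n=3):
    """Weight extracted concepts by the reach of the content they
    appeared in; the heaviest concepts seed topic suggestions.
    `articles` is a list of (concepts, reach) pairs."""
    weights = Counter()
    for concepts, reach in articles:
        for concept in concepts:
            weights[concept] += reach
    return [concept for concept, _ in weights.most_common(top_n)]
```

A concept mentioned once in a high-reach article can outweigh one mentioned repeatedly in low-reach posts, which is the point: suggestions follow where the conversation actually travels.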
Data is collected from traditional and social media sources, as well as trusted media sources, for each broad market. Content is weighted based on the source it came from, using a proprietary algorithm. Currently, calculations are run at the end of each month over the entire month’s worth of data.
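Source weighting means a mention is not just a mention: where it appears matters. As a rough sketch (the weights and source names below are invented for illustration, and the real algorithm is proprietary):

```python
def weighted_sov(mentions, source_weights, vendor):
    """Source-weighted share of voice: each mention counts in
    proportion to the trust weight of the outlet it came from.
    `mentions` is a list of (vendor, source) pairs."""
    total = 0.0
    vendor_total = 0.0
    for v, source in mentions:
        w = source_weights.get(source, 1.0)  # default weight for unknown sources
        total += w
        if v == vendor:
            vendor_total += w
    return 100.0 * vendor_total / total if total else 0.0
```

Under this scheme, two mentions in a heavily weighted trade publication can move a vendor's SoV more than a dozen low-weight social posts.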
The machine learning used in SoV is human-supervised (human-in-the-loop). SoV calculations can be fully automated; topic suggestions, however, are subject to language challenges and require domain expertise on top of raw data collection. Domain experts validate the SoV calculations and reformulate the raw entity extraction from top-performing content in each conversation into coherent topic suggestions.
Results that Influenced Topic Selection for ETL Integration in March 2018
Adobe led the share of conversation, largely due to their announcement of partnerships including SnapLogic and TMMData. This also contributed to the bump in TMMData’s and SnapLogic’s SoC. On their own, SnapLogic had great news to report on their enterprise integration tool’s performance. Datameer and Matillion captured the rest of the SoC, with a partnership announcement with IBM and a funding round announcement, respectively.
- At Adobe Summit 2018 (http://summit.adobe.com/na/), the company announced it is expanding its network of 5,000+ global partners. Those partners include data integration/ETL vendors SnapLogic and TMMData. So while Adobe itself is not actually an ETL player, its reach contributed to the conversation in March in a big way.
- SnapLogic’s enterprise integration tool is processing one trillion documents a month.
- Datameer announced its partnership with IBM in New Data Science and Machine Learning Platforms.
- TMMData announced deep integration with Adobe Cloud Platform.
- Matillion announced they raised $20 million from Sapphire Ventures and Scale Venture Partners.
Here are the influencers driving the conversations about data integration and ETL in March 2018 on social media networks:
Here are the top blogs/news sources for Big Data, ETL, and integration for the month of March in 2018:
ETL is not a new category of technology and practices. But because of the uptick in Big Data and our data-obsessed tech world, it has been fully revitalized as a must-have tool for extracting value from, and managing, massive amounts of information.

Those interested in ETL have a broad range of backgrounds. Unlike other technologies, where use case and implementation never reach the business user, enterprise ETL does. That is why your primary practitioner persona should combine technical knowledge with business application: the “enterprise architect” and the “data engineer.” The lifespan of the ETL development and architecture skill set is long compared to other technologies, so there is no specific age range. However, you would expect an ETL practitioner to have at least 10 years of experience in business. Without that, they will not have an understanding of, or exposure to, large enough datasets and their data integration and transformation challenges.