paul vidal - thoughts about tech and more


3 essential features of the perfect software

Hi Mark. John left the company 3 months ago. Can you help us find a bug in his code? It is somewhere in this file.

I have recently been thinking quite a bit about what makes a piece of software successful. Very early in my career, I got to temper my idealistic view of computer science and the world in general, quite well captured by the saying “you don’t have to be the best to be the first”. As I progress in my professional journey, I have been trying to identify the key aspects that make a piece of software stand head and shoulders above its competitors, or see exponential growth in a niche of the market unexploited at the time. While it is most likely impossible to find the 3 magic words you have to pronounce to make the perfect software appear, I am still going to pretend I did, for web traffic purposes, because I have no integrity. All bad humor aside, and for the sake of readability, I did try to funnel my thinking into 3 major aspects which, combined, make a successful piece of software. Quick aside: the purpose of this piece is not to dive into which technical aspects of a piece of software are valuable, but rather to present the software features to which the market responds positively. With that out of the way, let me present the current state of my cogitation: the perfect software is:


SIMPLE

Simplicity is what makes the end user open their eyes. While the algorithms, architecture and other under-the-hood building blocks can and will most likely be complex, the idea here is to present something that is simple to understand. Simplicity can be driven by multiple factors. It could be, for instance, the front end of your application. This is why UI/UX is such a sought-after skill, and while it is predominant in the B2C industry, it is severely underused in the B2B world. It could also be driven by the product packaging: if you are building a platform with many potential uses, packaging them into specific solutions recognized by the industry is a fair way to achieve simplicity. It could also be a matter of targeting: focusing on solving the problems of one vertical, for instance.

NON-INTRUSIVE

This characteristic is epitomized by the success of cloud computing. Software as a Service in particular is the perfect example of non-intrusiveness as a successful business model: end users do not want to have to install and maintain software. It is an obvious cost-reduction feat for big enterprises, and it is just as much a reality for consumers: no one wants to have to install software on their computer, and the ones we do install are the ones we hate the most (he who had no complaint about Microsoft Word, cast the first stone). Even more interesting, web software like appointment booking, ticket sales and so on are considered websites and not pieces of software, but I digress. That being said, and as I argued as early as last week, SaaS isn’t the only model, and non-intrusiveness can be characterized by other traits. Backward compatibility, or preserving the existing set of skills and applications, is a great way to ensure a non-intrusive model. This is one of the reasons why I think disruption is nonsense; see my previous rant.

ACTIONABLE

Being simple and non-intrusive are essential qualities, but your software must actually do something in order to be valuable. The important question here is: what’s in it for your user? What is the value? I don’t think that a value proposition such as “we are doing it better than the others” is enough, nor is “imagine what you could do with that”. You need to be able to show tangible results right off the bat, and drive your customer through a story of what they will be able to do now that they weren’t able to do before. This is in my opinion one of the hardest and most often forgotten features for a software product to possess, especially in relation to the other two. Indeed, innovating while maintaining non-intrusiveness could be seen as an oxymoron. In reality, the perfect, unique, innovative, actionable value that no one has thought of before does not exist. That being said, many tech companies today start by building technical prowess instead of focusing on creating value. My recommendation is to think about the value first, then focus on making the solution simple and non-intrusive, which is ironically why this article is written in the opposite order.

Conclusion

Can a piece of software truly possess all these qualities fully? Probably not, but I think it is at least an ideal to strive for. As mentioned awkwardly at the beginning of this article, this is also a very preliminary assessment of my thought process. I do believe that if a piece of software possesses a good balance of these 3 features, it is set for success. More importantly, I think that these features should drive the development of new software. I know that I have a few ideas about what to develop, and I’m going to make sure to keep that in mind.

Should all your data move to the cloud?

by paul 0 Comments

I have recently engaged, on multiple occasions, in conversations about whether or not Fortune 500 organizations are ready to move all their data to the cloud. While I’m not arguing about the benefits of distributed systems, I did encounter a significant number of organizations that are not ready to move to a SaaS model. Beyond the obvious security reasons, I think that maintaining control over the core of your business to drive innovation is crucial (see the Tesla example). Furthermore, many organizations’ strategies seem to be moving toward building IaaS/PaaS, and eventually SaaS, within their own IT. These tendencies lead me to believe the dichotomy between SaaS and traditional in-house implementation isn’t absolute. Therefore, the market will see the advent of solutions enabling control over internal data while leveraging SaaS functionalities.

Since I work for a company offering one of these solutions, I wrote a white paper about it, so here it is. Enjoy the read!


The future of Data is Augmentation, not Disruption

by paul 1 Comment
I'm disrupting the light bulb market by enabling wireless. I call it "Photoshop"

I spent last week enjoying the Cassandra Summit, so much so that I did not take the time to write a blog post. I had a few ideas, but I chose quality over quantity. That being said, something interesting happened at the summit: we coined the term “augmentation” for one of my company’s key go-to-market use cases, instead of data layer modernization or digitalization. I even got the opportunity to try both terms on the different people visiting our booth. In this extremely small sample, people really tended to have a much better degree of understanding when I used the word augmentation, which got me thinking. I even read a very interesting article from Tim O’Reilly called Don’t Replace People. Augment Them., in which he argues against technology fully replacing people. Could this concept of augmentation be applied on a broader scale to understand our data technology trends? Maybe; at least that’s what I’m going to try to lay out in this article.

Technological progress relies on augmentation.

That’s the first thing that struck me when I pondered augmentation in our world, and more specifically when it comes to software. With very few exceptions, the platforms, apps and tools that we use are all based on augmentation of existing basic functions. Amazon? Augmentation of the store using technology. Uber? Augmentation of taxis. Chatbots? Augmentation of chat clients. Slack? Augmentation of email plus chat. Distributed/cloud applications? Augmentation of legacy applications. To some extent, even Google is an augmentation of a manual filing system. I will admit that listing examples that confirm an idea I already had is close to a logical fallacy, so I tried to find counter-examples, i.e. software solutions that introduce completely new concepts, but could not think of any. Of course we could argue over semantics in defining what constitutes true innovation versus augmentation of an existing technology, but ultimately I think it is fair to say that the most successful technologies augment our experience rather than being completely disruptive, despite what most of my field would argue. Therefore, augmentation must at least be considered part of the future of any software industry, including the Big Data industry.

Augmentation is better than transformation

Human nature needs comfort, and that’s why most of us prefer augmentation over disruption. By disruption, I’m talking about transforming or replacing existing systems, not adding features: selling unpaired socks over the internet is not disrupting the sock industry, despite what the TED talks would like me to believe. Seriously, when you have existing technologies, as every company does, a replacement or transformation is a hard pill to swallow. Loss of investment, knowledge, process, etc. It is especially risky and complex when talking about data layer transformation, as I have argued before in this very blog. So when given a choice, augmenting existing data layers is an obvious choice for risk-averse IT organizations.

Augmentation drives innovation

Perhaps the most convincing argument for acknowledging that augmentation is the future of data is an analysis of the most innovative big data software solutions: machine learning, neural networks and all of these extremely complex systems whose behaviors are almost impossible to predict, even for experts. These systems are designed to augment their own capabilities, instead of following a set of deterministic rules. Indeed, these systems are designed to approach the capabilities of complex biological systems and therefore incorporate their “messiness”. We can’t think about big data systems using physics thinking (i.e. here is an algorithm, here is a set of parameters, this is the expected result); we should rather rely on biology thinking (i.e. what results do I get if I input these parameters?). A great example of this type of thinking is Netflix’s Chaos Monkey, a service running on AWS that simulates failures so they can understand the behavior of their architecture. Self-augmentation is the principle upon which the technologies of the future are built. We understand the algorithms we input but not necessarily the outcome, which can sometimes have unintended consequences (see: Microsoft Tay), but it is ultimately a better pathway to intelligent technologies. I’m a control freak, and not being able to understand a system end to end drives me nuts, but I’m willing to relinquish my sanity for the good of Artificial Intelligence.
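The Chaos Monkey approach is easy to sketch: instead of proving what a deterministic system will do, you randomly kill instances and observe whether the whole keeps serving. Here is a toy illustration in Python; the `Cluster` class and its numbers are my own invention for the sketch, not Netflix’s actual implementation:

```python
import random

class Cluster:
    """Toy cluster: requests succeed as long as at least one replica is alive."""
    def __init__(self, replicas):
        self.alive = set(range(replicas))

    def kill_random(self, rng):
        # The "monkey": terminate one surviving instance at random.
        if self.alive:
            self.alive.discard(rng.choice(sorted(self.alive)))

    def serves_requests(self):
        return len(self.alive) > 0

def chaos_experiment(replicas, kills, seed=42):
    """Biology thinking: probe the system's behavior under random failures
    instead of deriving it from a deterministic rule set."""
    rng = random.Random(seed)
    cluster = Cluster(replicas)
    for _ in range(kills):
        cluster.kill_random(rng)
    return cluster.serves_requests()

print(chaos_experiment(replicas=5, kills=3))  # True: two replicas survive
print(chaos_experiment(replicas=5, kills=5))  # False: nothing left to serve
```

The point of the experiment is not the answer for one run, but the habit of asking “what happens if I do this to the system?” rather than assuming the architecture behaves as drawn on the whiteboard.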

Conclusion

With software augmentation being part of our everyday life, a safer and easier way to add features to existing data layers, and the core concept of machine learning, I think it is fair to say that it is the future of Data. Did I convince myself? Yes, which is good, because my opinion is usually my first go-to when it comes to figuring out what I think. Seriously though, what do you think? As always, I long to learn more and listen to everyone’s opinion!

Is patience overrated? How real-time big data affects our behavior.

by paul 0 Comments
soon.

If you haven’t figured it out by now, I’m fairly action-driven. One of the skills that is often pointed out to me as lacking is patience. If you want an illustration of my personality, I encourage you to read this comic:


Thank you @shenanigansen. Seriously, I have a problem. My cousin would tell me I need to do yoga, many of my friends would tell me I should practice mindfulness, and my Dad would tell me I should be patient. Here is my take on it: I think patience is overrated, and I think that it is the result of the technology we have at our disposal.

The advent of real-time in Big Data

A couple of years ago, the selling point of big data was the big in big data. Being able to store a practically unlimited amount of data was a game changer. But if you look at the recent trends (see a few excerpts here, here, and here), real-time and speed are the selling points. People want access to their data quickly, and I can tell you it is a major part of every data pitch I give. To be fair, the shortening of every part of your life is a trademark of the modern era, as much as hipsters are trying to fight it (typewriters, anyone?). However, I do think that accelerating big data access and storage has been, and will continue to be, one of the trends that impacts that acceleration the most. Indeed, with the luxury of recording everything in our lives, through IoT or simply by being a normal being that spends a significant portion of their time in the virtual realm (a.k.a. surfing the web or playing video games), real-time is the next game changer after personalization.

What that means for us, the end user

We are already a product being sold by every social media site, fitness tracker and video game. And we already see the outcome of this in targeted ads, suggestions and so on. But these suggestions can be a bit off at times (think of a suggestion for something you already bought), partly because the algorithms need more iterations, but also because of insufficient data: not a pure lack of it, but the latency of gathering all these pieces of data together. Imagine what speed can add to these phenomena. The accuracy of the suggestions will at times be frightening, but mostly we will become more impatient. And we already see the results of that. A recent example would be the reaction of retailers to chip readers, due to their processing time. We’re talking about a few seconds of difference, but it matters to us, the end users. Personally, if I can’t use contactless payment and have to pull out my card like an animal, I’m annoyed. And this trend will continue, folks, make no mistake.

Conclusion & Limitations

So why should I be patient? Why should I have to wait for a specific outcome? The frustration comes from the fact that many situations for which you are impatient are no longer limited by logistics themselves but by inaction, at least in the business world. But my point is the following: the world of data, and therefore to some extent our personal world, is moving to real-time. You can decide to be an outsider, and there is of course value in this, or you can adapt. The value of waiting for a possibly different situation is overrated. For instance, let’s say you have to make a life-changing decision. Chances are that the amount of data you have to make that decision now versus 3 weeks from now is going to be roughly similar. So why not make the decision now? Why be patient?
Of course, this may sound like I’m advocating for having results now now now now, like a 3-year-old (and I talk from experience). That is not it: valuing hard incremental work toward a long-term goal is extremely important, but patience as an excuse for inaction isn’t.

5 reasons you should go to the Cassandra Summit 2016

by paul 0 Comments
I did a search for summit on royalty free images and this is what I got.

For those who don’t know, I’ll be attending the Cassandra Summit 2016 in San Jose (possibly talking, but this is still in the works). The Cassandra Summit is organized by DataStax, the Cassandra enterprise company with which my company, K2View, is a partner.
I’m super excited about this summit, having participated in last year’s edition. I thought I’d share the excitement by writing a total click-bait of an article, expressing my genuine feelings of excitement. Seriously, I am excited about this summit. Of course my judgement is biased by the fact that I am part of the show, but I would not be working for whom I work now if I was not honestly passionate about this technological environment and the events that surround it. So allow me the right to be a nerd and share this with you: 5 reasons you should go to the Cassandra Summit 2016.

1. To learn about market-leading technologies

Like the paradoxical man would say, it goes without saying but it’s better said than not. Obviously, this should be the first thing you look for when attending this kind of summit. First, Cassandra and DataStax Enterprise are used by companies that are the leaders of our day-to-day technological life (e.g. Netflix, Apple): at this summit you get to talk to the people who implemented these clusters, and understanding their deployments is always fascinating. Perhaps even more interestingly, you get to learn about new companies leveraging Cassandra in use cases you never thought about. If you play your cards right, you should be able to overload your brain with new information, which is always a good feeling.

2. To listen to people that are smarter than you

Granted, this is not very hard in my case. Take a look at the conference agenda and the speaker list, though. I have a professional crush on Patrick McFadin, Chief Evangelist at DataStax, who was my first encounter with Cassandra. I really enjoy his delivery and always have fun listening to him, but he is one of many at this conference.

3. To genuinely connect with other data nerds

With our (professional) lives going at 100 miles per hour, we don’t get a chance to stop and tell someone: the gossip protocol is one of the coolest things. If you try to tell that to someone that does not work in the field, he probably won’t know what you’re talking about; if you try to tell that to someone in your field, it either comes out as a platitude or you simply never get time to enjoy a very nerdy conversation. You get to do that at the Cassandra Summit. If you’re participating, grab a beer and a snack and come talk to me about anything you find cool, I’ll listen.
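And since I brought it up: what makes the gossip protocol so cool is that a trivially simple local rule, every node periodically repeating what it knows to one random peer, spreads information through the whole cluster in roughly O(log n) rounds. A toy rumor-spreading simulation (my own simplification for the sake of the conversation, not Cassandra’s actual gossiper):

```python
import random

def gossip_rounds(num_nodes, seed=1):
    """Simulate rumor spreading: each round, every informed node gossips
    to one randomly chosen peer. Returns rounds until everyone knows."""
    rng = random.Random(seed)
    informed = {0}  # node 0 starts with the rumor
    rounds = 0
    while len(informed) < num_nodes:
        rounds += 1
        for node in list(informed):
            peer = rng.randrange(num_nodes)  # pick any peer, even itself
            informed.add(peer)
        # the set of informed nodes roughly doubles each round
    return rounds

# A 100-node cluster converges in a handful of rounds, not 100.
print(gossip_rounds(100))
```

Run it with different seeds and cluster sizes and you will see the round count grow logarithmically, which is exactly why gossip scales so well.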

4. To witness cool logistic hacks

Two words for you: whiteboard tables. This blew my mind last year, being able to doodle with a marker on the very table you sit at is amazing. Why isn’t every conference room table a whiteboard? I will never know. I can’t wait to see what cool things the organizers will come up with this year.

5. To have fun

Look. Work is arguably the largest part of our lives outside of sleeping. It is not every day we get to be in an environment full of new, exciting information, surrounded by extremely intelligent and passionate people, where everything has been thought of down to the last detail. I like to think of it as an all-inclusive resort for data nerds. I’ll be damned if I don’t enjoy every minute of it, and so should you, so please, enjoy yourself!

Becoming intimate with Big Data

by paul 0 Comments
Come on guys, we are all made of blue glass inside.

About a year ago, I had the chance to have a discussion with one of the smartest people I’ve ever met, currently a board member of our company. This man has not only built his fortune out of nothing by being able to identify trends in the market and position his companies accordingly, he is also a genuine human being who commands admiration. But I digress. During this conversation, he mentioned that one of the things that helped him succeed was his capacity to understand the intrinsic values that define a generation. As an example, he mentioned that his generation, during the 90s, was all about financial success. The following generation, the 2000s kids, was all about fame (Big Brother, anyone?). Then he told me that he had yet to figure out what my generation was all about. Since then, I have been trying to understand what makes my generation tick. After about a year of poking around, I think that I found the answer: my generation is the selfish generation. We are all selfish and think about our individuality. Look around: it’s selfies, freedom above all, my Facebook or my privacy, my right to an opinion, my right to an outlet to express my ideas. I’m including myself in this, of course; I am writing a blog after all. What’s interesting about this realization is understanding the consequences it has on the market, and specifically in a domain in which I have at least a bit of expertise: Big Data.

Big Data is driven by the individual

In a recent report from Forrester (link), companies were asked “Which use cases are driving the demand for continuous global data availability at your organization?”. The most common use case, representing 52% of the answers received, was a 360-degree view of the business or product. This means that more than half of the big data drivers come from the consolidation of data to represent an individual unit of business. Make no mistake: in many cases, the product is you. What drives big data is the intimate knowledge of the individual. This makes perfect sense if you agree with the premise of my first paragraph: big data, and the market in general, wants to cater to the selfish generation, and is therefore implementing solutions to know each individual personally.

This report is only one of numerous examples corroborating what I’m trying to explain here. We see machine learning algorithms and data scientists arguing about which algorithm is best to target individuals with the right ad. IoT is tracking and personalizing every aspect of our lives. Anecdotally, I even witnessed the renaming of a data analytics team in a large company to “Your Data”.

What does this mean for your Big Data implementation?

First, you need to consider that in order to keep a relevant edge over your competition, you must have access to a solution that individualizes your data collection. I have expressed this opinion quite a bit, but I believe that ultimately individualization of data is a use case that requires its own solution. There is no magic end-to-end consolidation platform that will do everything. You need to consider a big data individualization platform, as opposed to a generic big data platform that you then try to morph to cater to your individualization needs. Once implemented, this data individualization platform can be leveraged to implement further features like real-time provisioning, data virtualization, personalized analytics or truly customer-centric support, but your platform must be intimate with your unit of business first.
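To make “intimate with your unit of business” concrete, here is individualization in its simplest possible form: records scattered across source systems get consolidated under a single customer key, so every question starts from the individual rather than from the sources. The sources and fields below are made up for illustration:

```python
from collections import defaultdict

# Hypothetical records from three separate source systems, keyed by customer id.
crm     = [{"id": "c1", "name": "Ada"}, {"id": "c2", "name": "Alan"}]
orders  = [{"id": "c1", "order": "laptop"}, {"id": "c1", "order": "mouse"}]
support = [{"id": "c2", "ticket": "login issue"}]

def consolidate(crm_rows, order_rows, ticket_rows):
    """Build a 360-degree view: one document per customer, not per source."""
    view = defaultdict(lambda: {"orders": [], "tickets": []})
    for record in crm_rows:              # CRM: master data
        view[record["id"]]["name"] = record["name"]
    for record in order_rows:            # order history
        view[record["id"]]["orders"].append(record["order"])
    for record in ticket_rows:           # support tickets
        view[record["id"]]["tickets"].append(record["ticket"])
    return dict(view)

view = consolidate(crm, orders, support)
print(view["c1"])  # {'orders': ['laptop', 'mouse'], 'tickets': [], 'name': 'Ada'}
```

The real platforms add freshness, scale and governance on top, but the shape of the output is the point: the individual is the schema’s center of gravity.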

Essential resources on Machine Learning

by paul 0 Comments
"Maybe you should be spending some time learning instead of relying on machines" - Some hipster

I’ve always been fascinated by Artificial Intelligence in science fiction. I’m lucky to live in an era that is seeing the birth of a new kind of Artificial Intelligence, enabled by Big Data, advancements in supercomputers and Machine Learning. I even work in a field that gets to implement this kind of technology, which continues to excite and fascinate me. Machine learning is today moving out of the realm of pure research and into real-world applicability. But as with any new cutting-edge technology, we need to beware of products untruthfully using the words Machine Learning in their marketing message, or touting Machine Learning as the cure for all diseases. Therefore, I think it’s important that we spend some time understanding what Machine Learning is, as well as what it does and can do in the industry. Since I’m not an expert on Machine Learning (… yet), I spent some time gathering resources to enhance your Human Learning about Machine Learning. Happy reading!

Introductions

  • First things first, wikipedia: link
  • An excellent visual introduction on Machine Learning from R2D3: link
  • An early draft of a Machine Learning book from Stanford University: link
  • Introduction to Machine Learning from Cambridge University: link

Technical courses

  • In-depth videos on Machine Learning, from Data School: link
  • What is Machine Learning, from Data Camp: link
  • Introduction to Machine Learning, from Udacity: link
  • Machine Learning, from Coursera: link

Machine Learning in the market

  • Gartner 2015 Hype Cycle: Big Data is Out, Machine Learning is in: link
  • Gartner 2016 top 10 trends: link
  • Machine Learning, What it is & why it matters, from SAS: link
  • Marketplace for Machine Learning Algorithm, Algorithmia: link
  • The future of Machine Learning, from David Karger on Quora: link

What is a data scientist and do I need one?

by paul 0 Comments

A good friend once told me: “If your profession is not represented by a cartoon animal, your job description is made up and society does not need you”. This is when I realized that my life was a lie and I was condemned to eternal despair. But I digress. On the subject of data scientists, this is a role that has recently been introduced to the marketplace, so I think it’s important to ask ourselves what this role is and who can benefit from a data scientist.

The evolution of data

In recent years, data has experienced profound changes. Not only has the technology behind data storage and management dramatically evolved from standard relational models to distributed solutions, but the place of data in the enterprise, and in people’s minds, has changed. Suddenly, data is becoming a sexy buzzword instead of a necessary evil. Indeed, data has become its own entity within business organizations, with entire teams dedicated to it. Companies no longer ask “do I really need to keep this data?” but “how can I make sure that I keep all the data I have?”. With this advent, new roles started to emerge, and this is when “Data Scientists” were introduced.

Buzzword or actual role?

Many argue that data scientists are just a fancier replacement for business/data analysts; “A Data Scientist is a Data Analyst Who Lives in San Francisco”, as you can read in this article (a very good read, I might add). I agree to a certain extent: data scientists are people who dive into software to get results that will ultimately help make business decisions. Company leadership has always relied on this type of analysis from experts called business analysts. Business analysts even use business intelligence software to do data mining, generate statistics and guide business solutions, which are some of the principal prerogatives of data scientists.

But I do think there is a fundamental change to be considered: data platforms are now a separate piece of software. Before the advent of big data, software used data layers. Nowadays, you have data lakes, data virtualization layers and real-time data warehouses that are their own entities. Using these platforms requires a combined set of skills: knowing how to use data platforms intimately (skills formerly owned by data administrators) and being able to generate business intelligence out of them (skills formerly owned by business analysts).

As such, I think that a new designation for this combined set of skills is fair; and it looks like Wikipedia agrees with me by calling data science an interdisciplinary field: “Data science is an interdisciplinary field about processes and systems to extract knowledge or insights from data in various forms, either structured or unstructured, which is a continuation of some of the data analysis fields such as statistics, data mining, and predictive analytics, similar to Knowledge Discovery in Databases (KDD).”
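As a toy illustration of that combination, here is a sketch using nothing but Python’s standard library, with sqlite3 standing in for the data platform: the first half is the platform skill (knowing how to query the store), the second half is the analyst skill (turning the result into a business answer). The table and numbers are invented for the example:

```python
import sqlite3
import statistics

# Stand-in for a data platform: an in-memory SQL store with order data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("c1", 120.0), ("c1", 80.0), ("c2", 40.0), ("c3", 300.0)])

# Platform skill: extract per-customer revenue from the store.
rows = db.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer"
).fetchall()

# Analyst skill: turn the raw numbers into a business answer.
revenues = [total for _, total in rows]
print(f"median customer revenue: {statistics.median(revenues)}")
# prints: median customer revenue: 200.0
```

A data scientist lives on both sides of that comment line at once, which is exactly why the role deserves its own name.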

Do I need to hire a data science team?

I think that there is a better question to ask: “what am I doing with my data?”. Don’t get me wrong, the trend of wanting to accumulate as much data as possible is great. It is especially great for me, who works for a company that provides data management solutions. But I have seen implementations of massive data lakes take years and months with very little use coming out of them, and this is a shame.

New data platforms give businesses a tremendous opportunity. Instead of relying on the wisdom of visionaries or accumulated experience to make difficult business decisions, we get to gather evidence and make an informed decision. But you need to know what you want to know first. Once you do, you can decide which platform is good for you and what type of data scientist you should hire. This will give you much more tangible results than buying a huge data platform, hiring an army of data scientists and doing fundamental data research. OK, I made that last part up, fundamental data research is not a real thing… yet!

5 reasons why software consolidation always fails

by paul 0 Comments
INSTRUCTIONS WERE UNCLEAR

Let’s start with a dare: I dare you to go to any large corporation, find an IT architect and ask them to give you a diagram of their complete architecture. I honestly think that they will politely ignore you, but for the sake of argument, let’s assume they have access to this end-to-end architecture, that this architecture is accurate, and that you can find a screen or a piece of paper big enough to fit all of it on one page. By looking at this diagram, you will quickly understand why software consolidation is a very appealing proposition: multiple pieces of software serving the same purpose, duplicated teams, disparate processes… Think of all the money you could save if you bought this giant universal platform that everyone will use and that will give you complete control over your IT!

Except that never happens. This giant convergent platform never gets implemented, even if it is restricted to a certain functional vertical (e.g. billing, ERP, etc.). So why can’t we consolidate pieces of software into one? Let me give you my two cents.

Note: Hopefully the example I gave speaks for itself, but let me clarify the context of this article: I am specifically addressing software consolidation for very large organizations; of course, if your organization employs 10 people and you’re all using google apps, then this does not apply to you.

1. Large systems are complicated

This goes without saying, but it’s better to say it: the answer to the ultimate question of life, the universe, and everything is fictional. Seriously though, imagining a solution that would cater to the needs of every company and every use case is ludicrous.

2. Enterprise software is outdated

While we can all agree that a universal solution is a utopia, that does not mean you can’t create a solution that covers a large percentage of the need, or so the smart guys at big enterprise software companies must have thought. To cater to the remaining few percent, customization can be added (for a fee, charged by the software provider itself). And they have. These large enterprise software implementations have become colossi (at least I think that’s the plural of colossus) that are really hard to move: they are gigantic, expensive, slow to respond and built on backend technologies from the 70s.

As a result, these platforms become engorged, and most of the innovation around them is about managing them more efficiently rather than offering a competitive advantage over the rest of the market. Let’s be clear: I’m not saying big enterprise software is dead; it is necessary.

But in an established competitive environment, you distinguish yourself by fighting for the edges, which means reacting fast, which is incompatible with these outdated, massive implementations.

3. Companies need solutions, not platforms

How does one find one’s competitive edge? By implementing efficient, targeted solutions. As far as I can see, this trend does not seem to be slowing down, quite the contrary (which I believe is a very healthy response). However, the multiplication of targeted solutions makes the consolidation problem both more complicated and more necessary.

4. Budget and learning curves are real constraints

Again, this might seem banal, but it is worth saying. An enterprise is a team of people, each with their own expertise, responding to the demands of the market. Any change has a cost upfront and downstream, especially when replacing a well-known piece of software as part of a consolidation effort.

5. Consolidation software isn’t business driven

In this realm where a single solution does not exist and businesses tend to purchase more and more specific solutions, data consolidation platforms flourish. Unfortunately, in order to cater to the complexity of the systems we’re dealing with, they are often driven by the underlying technology and not by the business requirements.

This sounds a lot like business jargon, so let me explain with an example. Your software relies on its data back-end, and if you have ever tried to consolidate multiple back-end systems together, whether using a traditional or a distributed data platform, the first thing you end up doing is designing the data schema of the platform, then implementing a way for the data to move from the multiple backends into this system.

This is not the way your business wants to see consolidation. Your business has a clear idea of the most important entity from which it can gain insight (for example, analyzing user or customer behavior). This means that your consolidation platform’s schema needs to be able to adapt to your business, and not your business to try and fit into a schema.
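Here is a minimal sketch of the difference, with entity names invented for the illustration: in the source-driven approach the schema mirrors the backends and the business has to join them itself, while in the business-driven approach the schema starts from the entity the business cares about (a hypothetical `Customer` here) and the backends are mapped into it:

```python
from dataclasses import dataclass

# Source-driven view: rows as two hypothetical backends expose them.
# Answering one business question means joining these by hand.
crm_rows = [("cust-1", "Ada"), ("cust-2", "Alan")]
billing_rows = [("cust-1", 99.0), ("cust-1", 25.0)]

# Business-driven view: the schema *is* the business entity, and each
# backend is mapped into it, not the other way around.
@dataclass
class Customer:
    customer_id: str
    name: str = ""
    total_billed: float = 0.0

def build_customer(customer_id):
    customer = Customer(customer_id)
    for cid, name in crm_rows:           # map the CRM backend in
        if cid == customer_id:
            customer.name = name
    for cid, amount in billing_rows:     # map the billing backend in
        if cid == customer_id:
            customer.total_billed += amount
    return customer

print(build_customer("cust-1"))
# Customer(customer_id='cust-1', name='Ada', total_billed=124.0)
```

When a new backend shows up, you extend the mapping into `Customer`; the business-facing schema stays put, which is the whole point.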

So what’s next?

Software consolidation has tremendous potential for giving insight to any business owner. But it needs to be a solution, not a generalized overhaul of the IT ecosystem. Therefore, I think it requires a good data virtualization solution. This solution must have at least the following qualities:

1. Be business oriented
2. Be able to publish fresh data on demand
3. Be flexible enough to interface with any new element of the IT ecosystem
4. Be able to handle any amount of data
5. Be able to publish results using known methods (standard connectors/languages)

Of course, I work for a company that provides all these capabilities, but that does not make my analysis unfounded. I would not work for a company if I didn’t believe it provided something truly unique and needed by the market. I genuinely believe that this type of solution will be the cement of future IT ecosystems.