We are pleased to announce our new Big Data Maturity Assessment, a free, online assessment that helps CDOs, Data Architects, and other data-focused professionals get the insights they need to accelerate big data development in their organizations.
This month’s infographic features big data use cases that organizations in any industry might find interesting. We decided to focus on industry-agnostic use cases because, in our experience, some of the best big data innovation happens when organizations look at how the major players in other industries are using big data and consider how they can apply similar methods, tools, and technologies in their own.
Check out the infographic for big data use cases that can be applied in any industry:
Knowledgent’s Life Sciences team had a great time at Bio-IT World in Boston earlier this month. Bio-IT provides the ideal environment for organizations across the pharmaceutical, clinical, healthcare, and IT industries to share insights and discuss how emerging technologies are impacting research and development. In fact, our very own Chris Young presented with Drew Holzapfel of the CEOi on the potential of applying big data to finding a cure for Alzheimer’s. (In case you missed it, you can view the slides from their presentation here.)
With significant changes in both technologies and the R&D landscape, Bio-IT was buzzing with a wealth of new information. Here are three of our key takeaways from the event:
This webinar is not just for folks in the life sciences industry. It presents an interesting ideation case study for anyone interested in big data, in using datasets from seemingly unrelated fields and sources, or in understanding potential new ways to diagnose Alzheimer’s and improve patient care. During the webinar, we’ll discuss how big data analytical disciplines can support the global effort to identify key associations – rather than a direct cause – by leveraging datasets across life sciences, healthcare, government agencies, and academia, helping us understand who gets Alzheimer’s, how we treat them, and how we prevent the disease in others.
For more information or to register for this free webinar, visit the event page.
Chris Young, Life Sciences Partner with Knowledgent, presented on current efforts to apply big data solutions in support of finding a cure for, enabling early detection of, and innovating care delivery for Alzheimer’s at Bio-IT World at the Seaport World Trade Center earlier this month.
Missed his presentation? You can still view the slides:
Using big data to improve health outcomes has rapidly become a prevalent trend in the healthcare industry; in fact, most healthcare organizations are already doing it. However, only recently has the promise of big data been explored for treating Alzheimer’s.
Chris Young, Life Sciences Partner with Knowledgent, will be presenting on current efforts to apply big data solutions in support of finding a cure for, enabling early detection of, and innovating care delivery for Alzheimer’s at Bio-IT World at the Seaport World Trade Center in Boston, Massachusetts. His session, “Big Data Agenda to Tackle Alzheimer’s,” will take place in the Vendor Theater on the expo floor on Wednesday, April 30, from 3:20 to 3:40 pm.
Prior to the event, Chris shared his insights on the potential of big data specific to Alzheimer’s:
With the rise of Hadoop, the demise of the data warehouse seemed only a matter of time. Surprisingly (or not), that’s not the case. Instead, organizations are looking to augment their current enterprise data warehousing solutions with the analytics and cheaper storage that Hadoop brings.
As discussed in my previous blog, data in Hadoop is mainly accessed programmatically: through the MapReduce framework (typically written in Java, or in Python via Hadoop Streaming), through the Pig scripting language, or through Hive’s SQL-like query language, HiveQL. However, the data analysis skillset most prevalent in IT shops is SQL, which means that Hadoop will have to support a SQL interface in some capacity to appeal to these analysts and to the widespread BI tools already in use. Hadoop, though, was created to process data in “batch” mode – you submit jobs to analyze massive datasets stored in HDFS. A number of initiatives and solutions are underway that are focused on marrying SQL with Hadoop. This convergence is truly in progress.
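To see why a SQL interface matters, it helps to compare the two programming styles. The sketch below is illustrative only (plain Python, not an actual Hadoop API): a simple word count that a Hive user would write as a one-line GROUP BY query instead has to be expressed in MapReduce as explicit map, shuffle, and reduce steps.

```python
from collections import defaultdict

# In HiveQL, this entire aggregation is roughly:
#   SELECT word, COUNT(*) FROM words GROUP BY word;
# In the MapReduce model, the analyst writes the phases by hand:

def map_phase(records):
    """Map: emit a (word, 1) pair for every word in every record."""
    for record in records:
        for word in record.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group values by key (Hadoop's framework does this for you)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts emitted for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

records = ["big data", "big insights"]
counts = reduce_phase(shuffle(map_phase(records)))
# counts == {"big": 2, "data": 1, "insights": 1}
```

The gap in effort between the one-line query and the three hand-written phases is exactly what the SQL-on-Hadoop initiatives mentioned above aim to close.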
Most of our conversations about information management and other data-focused topics touch on data governance (specifically, what good data governance is and how to leverage it). This may seem like a good place to start; after all, if you’re trying to get the most value out of your data, you want the data to be clean, accurate, and accessible.
However, at Knowledgent, we like to look at things from another angle and ask a different question: Why data governance?