7 data visualisation insights from Global STAC Live 2020

Global STAC Live brings together CTOs and industry leaders from across data technology – from experts in infrastructure engineering and solutions architecture to machine learning and data engineering – to discuss the latest trends and challenges in investment and trading.

During the latest event, which took place on 19th-22nd October 2020, Brytlyt’s Founder and CEO Richard Heyns and Business Development Director Larry Wienszczak spoke alongside experts from companies such as Kx and Deephaven Data Labs. Together they highlighted the importance of real-time decision making in the financial world – and the gap in the technological capabilities currently available to power real-time visualisation in data analytics.

In this article, we discuss the prevalent issues users face when working with vast or streamed financial datasets, as well as the key points raised at the event about what can be done to solve them – from boosting compute power, to introducing advanced techniques such as AI, to changing how we approach ad hoc visualisation altogether.

Key takeaways from the Global STAC Live event:

1. Data evolution comes with opportunities and challenges

Data has quickly become the most valuable capital anyone in the finance community can have. Analysing and visualising data to gain insights has opened up vast business opportunities, unlocking critical advantages such as improved decision making, deeper customer understanding, market prediction, competitive margins and more.

However, the nature of organisations’ datasets is continually changing, which introduces challenges for processing. In a market where seconds count, having to compute or prepare increasing amounts of data, and to repeat the process frequently on new datasets, is a costly enterprise. In addition, interpreting and sharing the data also slows time-to-action.

Real-time, or speed of thought, data visualisation is therefore a relevant and sought-after solution to stay ahead of the data evolution and competitors.

2. Users require ad hoc exploration for speed of thought insight

One way to speed up processing is to create pre-aggregated datasets or to predefine visualisations. While this can potentially deliver ‘real-time’ speeds, it doesn’t allow for in-the-moment questions or visualisations of the most up-to-date data.

Therefore, users need to perform ‘on the fly’ analysis and visualisation to truly exploit speed of thought insight. This requires fast ad hoc capabilities for both streaming and static data, including interactive UIs, filtering, sorting, query creation, aggregation, formula creation, and the ability to harness unique data.
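As a minimal sketch of the difference between the two approaches – using pandas and an invented trades table, so every name, column and value here is an assumption rather than anything presented at the event – a pre-built rollup answers only the questions it was designed for, while an ad hoc question has to run against the raw data:

```python
import numpy as np
import pandas as pd

# Hypothetical trades table - the column names and data are illustrative only.
rng = np.random.default_rng(0)
trades = pd.DataFrame({
    "symbol": rng.choice(["AAPL", "MSFT", "GOOG"], size=100_000),
    "price": rng.normal(100, 5, size=100_000),
    "qty": rng.integers(1, 500, size=100_000),
    "ts": pd.date_range("2020-10-19", periods=100_000, freq="s"),
})

# Pre-aggregated approach: a rollup built ahead of time can only answer the
# questions it was designed for (here, average price per symbol).
rollup = trades.groupby("symbol")["price"].mean()

# Ad hoc approach: an in-the-moment question ("what traded in the last five
# minutes above a given size?") has to run against the raw data, which is
# where raw compute speed becomes the limiting factor.
cutoff = trades["ts"].max() - pd.Timedelta(minutes=5)
recent_large = trades[(trades["ts"] >= cutoff) & (trades["qty"] > 400)]
print(recent_large.groupby("symbol")["qty"].sum())
```

The point is not the syntax but the shape of the problem: the second query cannot be served from a rollup prepared yesterday.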

As a result, ad hoc analysis and speed of thought visualisation require fast compute speeds to keep up with the rate at which our environments change.

In theory, this is an ambitious, next-generation ability that current solutions aren’t prepared to fully realise without extensive innovation. However, GPU technology is already offering a solution to this problem. GPU database solutions can accelerate visualisation tools and provide the compute power necessary to perform flexible, ad hoc operations on whatever query or dataset you are working with at a given moment, at the speed of thought.

3. The full possibilities of real-time haven’t been unlocked

While we know there is power in real-time data and dynamic visualisation, it is still completely new territory. Advanced technologies like GPUs have unlocked the ability for developers to enhance performance, but users are still in the process of discovering new use cases and applications for this capability. Even the problems people are trying to solve, and the dialogue surrounding them, are still being defined.

It’s also important to note that real-time data needs to remain connected to historical data in order to be valuable, so exploring how to use real-time streams in the context of historical data will yield greater rewards.

4. Speed doesn’t negate precision

Gaining processing speed has historically meant sacrificing detail in analysis, but modern users looking for speed of thought visualisation want both – they want to perform fast, dynamic queries that find exact, accurate answers to specific questions across big datasets.

The compute power this requires means there needs to be a shift from thin clients to a tight coupling between client and server. This relationship enables the server to send down sample data that represents millions of points, producing a graph that looks the same as one rendered from the full dataset.

Clients then have the control to interact with, render and refine these visualisations instantaneously – allowing them to home in on the individual details and specific insights they need, within millions of data points, in a very user-friendly way.
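As a rough sketch of what “sending down sample data that represents millions of points” could look like on the server side – the bucketing strategy, column names and sizes below are assumptions for illustration, not a description of any vendor’s implementation:

```python
import numpy as np
import pandas as pd

def downsample_for_client(df: pd.DataFrame, value_col: str, buckets: int = 2_000) -> pd.DataFrame:
    """Reduce millions of time-ordered rows to a few thousand that preserve
    the overall shape: keeping each bucket's min and max means spikes survive,
    which a naive every-Nth-row sample would miss."""
    bucket_id = np.linspace(0, buckets, num=len(df), endpoint=False).astype(int)
    grouped = df.groupby(bucket_id)[value_col]
    keep = pd.Index(grouped.idxmin()).union(grouped.idxmax())
    return df.loc[keep].sort_index()

# Example: five million synthetic ticks reduced to at most ~4,000 rows.
ticks = pd.DataFrame({"price": np.cumsum(np.random.default_rng(1).normal(size=5_000_000))})
sample = downsample_for_client(ticks, "price")
print(len(ticks), "->", len(sample))
```

Keeping each bucket’s extremes is one way to ensure the chart the client renders keeps the same visual shape as the full dataset, while staying small enough to redraw instantaneously.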

5. Simplicity is even more important in the face of advancement

The increase in data complexity, size, and speed necessitates more intuitive user interfaces that help users navigate huge amounts of dynamic data. This is even more pressing with advancements such as real-time visualisation, where, without the ability to easily understand, customise and control their data, users would find it impossible to surface relevant insights.

That’s why, seemingly in contradiction, as analytics and visualisation capabilities become more complex, the user-facing element of that technology will need to become even simpler. Plus, as data becomes more important to all areas of business – and is therefore shared with a wider range of users, no longer limited to data scientists alone – visualisations will need to become easier to understand, transfer, and share with non-expert users.

For example, there could be a time where users are able to present, or even perform, advanced visualisations from their phones for the benefit of sharing with their clients.

6. The biggest trends gaining momentum in data

Machine learning and AI are increasingly being integrated with visualisation tools to help users identify anomalies, relevant intelligence or individual data points based on past behaviour. Data visualisation can then be applied to these elements on an ad hoc basis to provide real-world context. This will be a valuable capability for maintaining precision as the technology advances.
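A minimal sketch of this kind of anomaly flagging, using a simple rolling z-score rather than any particular vendor’s model – the window, threshold and synthetic price series are arbitrary assumptions:

```python
import numpy as np
import pandas as pd

def flag_anomalies(prices: pd.Series, window: int = 100, threshold: float = 4.0) -> pd.Series:
    """Mark points that deviate sharply from recent behaviour; the flagged
    points can then be pulled into an ad hoc visualisation for context."""
    rolling = prices.rolling(window, min_periods=window)
    zscore = (prices - rolling.mean()) / rolling.std()
    return zscore.abs() > threshold

rng = np.random.default_rng(2)
prices = pd.Series(np.cumsum(rng.normal(size=10_000)))
prices.iloc[7_000] += 100          # inject an obvious outlier
print(prices[flag_anomalies(prices)])
```

In a production tool the model would be far more sophisticated, but the workflow is the same: the model surfaces candidates, and ad hoc visualisation supplies the context around them.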

Further, in some cases SQL is being integrated with machine learning, allowing users to define AI workloads directly against their data.

Augmented reality is a much-discussed capability that hasn’t yet found a concrete purpose in analytics. However, as long as there is a relevant reality underpinning it – such as overlaying the product price or company stock of a passing car – AR could reinvent data visualisation on hand-held and portable devices.

Cloud services for data visualisation enable users to adopt a cost-effective, elastic data model. In the face of dynamic workloads and the reality that what you want to see tomorrow won’t be the same as what you prepared for today, scalable infrastructures enabled by the cloud will be increasingly in demand.

7. Trends to expect more of in the future

As our datasets become larger and more changeable, users will naturally demand increased interactivity and speed, more sophisticated capabilities, and flexible capacity. These demands will shape the future of technology development and implementation.

The primary goal is to continue to establish and advance all of the above trends – real-time visualisation, ad hoc analysis, AI, the cloud and so on – while not losing accessibility or responsiveness.

Based on the burst of activity and change over the past decade, it’s likely that in five years we’ll be asking entirely new questions and demanding different capabilities again. The one reliable prediction anyone can make is that technology will need to be more adaptable by design.

Instead of building solutions that are restricted to our current needs, we need to consider how to future-proof technology for the unknown demands that will inevitably arise in a few years’ time.

About Brytlyt for Finance

Brytlyt empowers speed of thought analytics for data visualisation tools. Built on PostgreSQL, our GPU-powered database can be easily integrated with your existing solution to accelerate your capabilities.

BrytlytDB is 200-1000 times faster than any legacy system. It can compute billion-row datasets in milliseconds, enabling users to mine their entire dataset without compromising on responsiveness or detail. BrytlytDB further unlocks the ability to perform ad hoc and speed of thought visualisation without pre-aggregation.
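Because BrytlytDB is built on PostgreSQL, standard PostgreSQL tooling can be pointed at it. Below is a hypothetical sketch using psycopg2 – the connection details, table and columns are invented for illustration, not taken from Brytlyt’s documentation:

```python
import psycopg2  # standard PostgreSQL driver

# Hypothetical connection details - BrytlytDB is built on PostgreSQL, so a
# standard PostgreSQL client library can talk to it.
conn = psycopg2.connect(host="brytlyt.example.com", port=5432,
                        dbname="markets", user="analyst", password="***")

# An ad hoc aggregation over an invented trades table - the kind of query a
# visualisation front end might issue on the fly, with no pre-aggregation.
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT symbol,
               date_trunc('minute', ts) AS minute,
               avg(price)               AS avg_price,
               sum(qty)                 AS volume
        FROM trades
        WHERE ts >= now() - interval '1 hour'
        GROUP BY symbol, minute
        ORDER BY minute, symbol;
    """)
    for row in cur.fetchall():
        print(row)
```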

Finance use cases:

  • Monitoring banking performance and GDPR compliance
  • Field reporting in seconds
  • Performing compute intensive models
  • Performing a range of analytics simultaneously
  • Producing individualised profiling
  • Sophisticated fraud detection
  • Informed risk management

Get in touch to discover more about how Brytlyt can empower speed of thought visualisation for financial organisations.
