October 9, 2020

ETOT 2020: Scalefocus Leads a Discussion on Effective Data Management


On October 5-6, 2020, Commodities People held the 12th Energy Trading Operations & Technology Summit as a virtual event focusing on innovation, resilience and digitalization in the energy market.

As one of its Gold Sponsors, Scalefocus took part in the event and had the opportunity to network and hear from over 50 speakers on topics ranging from adapting to Covid-19 challenges and intelligent automation to RPA in the back office, algorithmic trading, trade surveillance and more.

Goran Stojanovski, Head of Data at Scalefocus, led an interactive panel discussion on Effective Data Management and was joined by Kevin Kindall, Senior Data Scientist at Hartree Partners, and Volodymyr Sorokoumov, Data Innovation Leader Digital Trading at Uniper. In this post, we share the highlights of the virtual session and the current topics surrounding effective data management in the energy trading industry.


IoT and predictive analytics

We’re currently witnessing a boom in the IoT market, with smart technologies connecting physical objects to digital platforms, helping us spot crucial patterns, gain valuable business insights and uncover new opportunities. Gigabit predicts that companies will invest a total of $15 trillion in IoT by 2025, while Forbes expects 3.5 billion connected IoT devices by 2023, with Asia leading the way.

In the energy industry, IoT can power predictive analytics, with the potential to reduce waste and energy distribution losses, offer higher cost efficiency, improve staff productivity, and support smarter real-time decisions that lead to positive business outcomes.

It’s crucial to note, however, that merely collecting data from IoT’s many sensors is not enough. The data needs to be processed and analyzed before its value can be realized, especially when it comes to trading. Fundamental analysis of commodity markets (the study of their supply and demand) leads to informed business decisions, but it depends on timeliness and quality and is very data intensive, as Kevin explains.

“There’s the never-ending quest to try to find things that have predictive value. And for those of you who work in the power markets here in the States, in PJM, when you get into the load auctions or you get into the FTR problem, it becomes very difficult very quickly. You have large amounts of data. You’ve got optimization problems you have to solve. You’re trying to model your grid stack. You’re trying to model future load demand, and you’re trying to model the network topology. And you have to get it all right in order to make informed decisions and to come up with an auction strategy.”

Many energy trading companies rely on digital platforms, forecasting tools and the cloud to transform and scale their data, but technology may not be the only factor for success, as Volodymyr points out. According to him, three components are necessary to reap the full value of data in an energy trading organization: the technology together with the people and culture; a stable governance framework; and well-maintained business operations.


Human vs. algorithmic trading

Considering the advantages of high precision, speed, and the instant detection of changes and trends in the energy market, we are seeing an increase in transactions executed through algorithmic trading. The process involves computers that gather and evaluate data, place an order and keep searching for profitable new opportunities. So where does that leave human traders?

Well, they surely won’t be eliminated anytime soon, according to Kevin. Human traders possess valuable commercial knowledge learned on the trading floor, and with decades of experience they have seen many possible outcomes and are able to react accordingly. Machines have not yet developed a deep understanding of long-term business processes or the ability to form human relationships, both of which help in dealing with volatile markets.
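
To make the contrast concrete, below is a minimal sketch of the automated gather-evaluate-order loop described above. The fetch_latest_price and place_order callables are hypothetical stand-ins for a market-data feed and an execution API, and the moving-average crossover is purely an illustrative signal, not a strategy discussed in the panel.

    from statistics import mean

    def evaluate_signal(prices, short_window=5, long_window=20):
        """Return 'buy', 'sell' or 'hold' from a simple moving-average crossover."""
        if len(prices) < long_window:
            return "hold"
        short_ma = mean(prices[-short_window:])
        long_ma = mean(prices[-long_window:])
        if short_ma > long_ma:
            return "buy"
        if short_ma < long_ma:
            return "sell"
        return "hold"

    def trading_loop(fetch_latest_price, place_order, max_iterations=1000):
        """Gather data, evaluate it, place an order, then keep scanning the market."""
        history = []
        for _ in range(max_iterations):
            history.append(fetch_latest_price())   # gather the data
            signal = evaluate_signal(history)      # evaluate it
            if signal != "hold":
                place_order(signal)                # place an order
            # ...and the loop continues searching for new opportunities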


Shifting the focus to quality

So we acknowledge the need for balance between technology and human intervention when it comes to predictive analytics and trading, but how do we measure the quality of accumulated data? How do we trust the results? Volodymyr suggests that the answer may lie in having a flexible framework that allows for prototypes and the implementation of various use cases.

“It should really be looked at use case by use case, at each stage of the data lifecycle, and basically supported from the inception of the idea through prototyping all the way down to industrialization and productization. And depending on which stage you are at with your use case, with your data product, you will have different requirements for the quality of the data. And I think organizations need to support all the stages of these use cases to make sure that they can innovate, fail fast, try many ideas, many hypotheses, test them and invest in those that are most promising, that can deliver value to the organization.”

Meanwhile, Kevin notes that data processing and cleaning prior to analysis differ between the front and middle office. The front office needs very clean data, which takes considerable effort to prepare. In the middle office, trade capture systems must record the trade information correctly the first time, including the set of prices necessary to value those trades. That becomes a challenge when dealing with thousands of forward curves, and when an error can seep through into the reporting and risk engines downstream.

When confronted with bad data, Kevin responds in one of two ways. He either changes the architecture, and with it the process that’s creating the error, or he uses robust mathematical estimates to show that errors below a certain threshold won’t have a major effect on the answer. For his internal customers, who make decisions based on risk limits, slight changes in the answer do not affect their decisions.

“In the energy industry, it’s very common to see the original data be pretty dirty. And yes, there is a lot of effort spent cleaning it up and trying to make things better, because it does influence the value that you’re trying to estimate or the decisions that you’re going to make.”
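
As a rough illustration of that second, robust-estimate route, the sketch below uses an outlier-resistant statistic (the median) and then checks that data errors below a chosen threshold never flip a risk-limit decision. The prices, the 2% error threshold and the risk limit are made-up values for illustration only, not figures from the discussion.

    import random
    import statistics

    def robust_value(prices):
        """The median is far less sensitive to a few bad ticks than the mean."""
        return statistics.median(prices)

    def decision_is_stable(prices, risk_limit, error_threshold=0.02, trials=200):
        """Perturb every price by up to +/- error_threshold and check whether
        the 'exceeds the risk limit' decision ever flips."""
        baseline = robust_value(prices) > risk_limit
        for _ in range(trials):
            noisy = [p * (1 + random.uniform(-error_threshold, error_threshold))
                     for p in prices]
            if (robust_value(noisy) > risk_limit) != baseline:
                return False
        return True

    prices = [42.1, 41.8, 42.3, 250.0, 41.9, 42.0]      # one obviously bad tick
    print(robust_value(prices))                          # ~42.05, unaffected by the spike
    print(decision_is_stable(prices, risk_limit=45.0))   # True: small errors don't flip the call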


Building agile AI and ML solutions

Beyond aiming for clean data, AI and machine learning also call for an effective architecture that is specifically designed to work with these technologies. Kevin advises organizations to find experts who understand their unique business problems, and to build the platform around solving them.

“It fundamentally comes down to a personnel problem. We’ve all seen instances where we build relational databases and then layer on top of that. And rather than becoming a platform that enables better calculations and better data access, it starts to become a constraint. And so what we don’t want to see happen, as these new tools are rolled out and so on, is for that same pattern to repeat.”

For Volodymyr, improving efficiency when bringing machine learning models from the lab to production remains a challenge. He suggests creating a curated feature repository rather than building a separate pipeline for each model.

“We can establish one central data foundation for all pre-calculated, trusted variables, trusted features that can be reused in multiple models and multiple machine learning pipelines… But we also have a trusted foundation where we can collaborate more easily and leverage all the artifacts, all that work that has been done in previous projects, in future projects as well.”
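
As a minimal sketch of that idea, the toy registry below keeps pre-calculated, trusted feature definitions in one place so that multiple models and pipelines reuse the same implementations. The class design and the example load features are assumptions made purely for illustration, not a description of any setup mentioned in the session.

    from typing import Callable, Dict, Sequence

    class FeatureRepository:
        """Central registry of named, pre-calculated, trusted feature functions."""

        def __init__(self) -> None:
            self._features: Dict[str, Callable] = {}

        def register(self, name: str, fn: Callable) -> None:
            if name in self._features:
                raise ValueError(f"feature '{name}' is already curated")
            self._features[name] = fn

        def compute(self, name: str, raw_data: Sequence[float]) -> float:
            # Every pipeline calls the same trusted implementation.
            return self._features[name](raw_data)

    repo = FeatureRepository()
    repo.register("avg_daily_load", lambda loads: sum(loads) / len(loads))
    repo.register("peak_load", max)

    # Two different models can reuse the same curated features
    # instead of each pipeline recomputing its own variables.
    loads = [410.0, 455.5, 498.2, 430.1]
    model_inputs = {name: repo.compute(name, loads)
                    for name in ("avg_daily_load", "peak_load")}
    print(model_inputs)   # {'avg_daily_load': 448.45, 'peak_load': 498.2}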


Migrating to the cloud

By the end of 2020, it’s estimated that 67% of organizations’ infrastructure and software will run in a cloud environment, while the public cloud market will have reached over $266 billion (TechJury). Many companies are realizing the benefits of migrating to the cloud: cost efficiency, scalability, data security and mobility, to name a few.

Volodymyr reminds us that despite the virtually unlimited data storage the cloud provides, it’s important to consider the collaboration aspect and to make an effort to avoid silos within the cloud. To extract the real value of the data, there should be a strong governance framework, a data culture and clearly defined business goals for each project.

The cloud does let us standardize to a great degree, but we should always ask: does it directly reflect the needs of the organization? How will it be used so that it’s beneficial? Kevin adds:

“There’s a tremendous amount of potential in the cloud, but it has to be done in such a way where it passes the use test so that when your traders and your internal customers see this, they obviously want to migrate to the cloud as opposed to what they currently have because it’s so much better.”


Developing a data management strategy

Before developing a successful data management strategy, effective data access should be ensured. Organizations should strive for data that is both in a useful form and timely, as this is what leads to good decision making.

Being able to make changes quickly, whether driven by source systems or by market trends, and to integrate data fast is important. There should be complete visibility of data flows and knowledge of the processes behind where and how the data is used. Below are four key steps that should be part of your data management strategy.

  1. Focus on the people: Encourage and promote data literacy, upskilling and knowledge sharing.
  2. Implement the right technology: Have the right tools and frameworks that support various use cases, and are suited to the specific needs of engineers, scientists and analysts.
  3. Understand the business needs: Assess various use cases and be aware of the value each will bring to your business.
  4. Cost efficiency: Ask yourself how your business needs can be met at the lowest cost.