Hello, good morning, good afternoon, good evening. This is Matt Eddy, and welcome to another Refinitiv Real-Time customer webinar. It's the 25th of January. If you're like us, then time's flying by already, and it's like we haven't even stopped for the new year. In that spirit, we've got 90 minutes of an action-packed agenda, as we always have. If I can just step through to the next slide. What you see on the screen right now is all of the presenters, hopefully in the order that they'll be presenting. You don't need to remember this; as always, we will share the presentation with you after the fact.
You'll have all of our names if you wanna reach out to anybody that you hear presenting today over the next 90 minutes. Next slide, please. You might have also realized we're using a slightly different platform to previous events, so I wanted to spend a couple of minutes just going through the housekeeping. Hopefully everyone's kinda got familiar with what the platform looks like. You've got your main panel, you've got your active speaker, but what you'll also have on the right-hand side is another section that's got our Q&A section and also our polls. That is powered by Slido. It's embedded into this dashboard, so you can find that. If you exit out, then you can pull it back up.
You can also log in through any other device, whether that's another laptop, through a mobile device. Just go to slido.com, put Real Time Q 1 in. You can post any questions there. We've got a bank of moderators that will look through that, make sure they're all appropriate and fit for work, then they'll post them up. What we'll be doing, I think we've got about four polls to go through, we'll direct you across to the right-hand side, or onto your device to vote for those and to feed that back. Importantly to note, we are recording today's session, so it will be available offline for subsequent entertainment. We'll also be sharing all of the material.
As with past webinars, all of the questions we don't have a chance to answer in the next 90 minutes, we will wrap those up into a document, and then we'll post those out as well. Any issues, refresh your browser. Hopefully it should be stable, and we should be able to move on. Next slide, please. Okay. With that, the agenda — we've got a lot wrapped up. We're covering off what we normally cover off in terms of our RTDS roadmap. More specifically, given time, we're gonna focus a bit more on some of the capabilities that we've got in 3.6.3, and we'll have a little demo of some new capabilities there.
Making a first appearance at the webinars, we've got Luke O'Sullivan, who is the product owner for Real-Time – Optimized. He's been spending a lot of time working with our platform team around our new identity and access management, the changes in behavior, and also the kind of user experience changes you'll see, and a lot of enhancements, hopefully. We think you'll enjoy that. Luke will do a bit of a walk-through of that and a demo. We're then gonna talk a little bit more about our APIs and our end-of-life strategy, which we spoke about previously. We'll give you an update on where we are with that.
We'll also talk about a capability that maybe not many people are aware of: the data library for Python. Olivia will walk through how simple that is to use and some of the use cases around that. Jeff and Ted will go through what's new in DACS 7.9, and Ted will walk you through some demos on that as well, plus some important information and some changes in the support. Last but not least, Vesna will wrap up with an ATS roadmap update covering what we've done in 2022, but also, importantly, what we're gonna be delivering in 2023. Without further ado, because like I say, we've got a lot on, I'll move on to the next slide, and then we'll move across to Jeff.
Like I say, use the question and answer functionality through the course of the event. If I see any questions I think are important, I'll try and drop them into the conversation, or we'll try and wrap them up at the end or in a subsequent follow-up. Okay? With that, I will hand over to Jeff Sewell, if we move on to the next slide. Thank you.
Thank you, Matt. Good day, everyone, and thank you for joining us for another customer webinar. Initially, I'm gonna run through the large enhancements on the roadmap for version 3.6.3, which we just released last month. You'll see in the first three lines of the roadmap that we added three large enhancements with this release. Time-based preferred host failback is the top line, the first large enhancement. Your preferred host is generally going to offer you the lowest latency or the route that is the lowest cost to connect to. When the preferred route is not available, you'll define alternative servers and routes to fail over to for resiliency. Prior to 3.6.3, reconnecting back to that preferred host when it became available was a manual process.
Now with 3.6.3, there's a new feature that provides you the ability to automatically fail back to the preferred host when it becomes available at a designated date and time, which you define, which is likely to be obviously after market hours or maybe even over the weekend. I think most of us would agree the ability to define a designated time to automatically fail back to your preferred host is better than requiring a manual intervention. After all, the reason you have a preferred host is for one of the benefits I mentioned. Reconnecting back to your preferred host is something that you can now configure to happen automatically.
As soon as I run through the rest of this page, I'm gonna have Parivat run through a demo of the new feature for time-based preferred host failback. Right. On the second line, we have support for field filtering for a range of fields. Let's face it, sometimes a record has more fields of data than you need, which consumes extra processing overhead and bandwidth to disseminate. Field filtering allows you to select only the fields of a record that are important to you and then simply drop the rest on the floor. You can select the list of fields you require by adding each and every field identifier or FID to the list. This, as many of you know, can be a very arduous process.
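As a rough sketch of what the 3.6.3 range enhancement buys you — the parameter name and file syntax below are hypothetical illustrations, not the actual RTC/ADS configuration format, so check the what's new for 3.6.3 on My Refinitiv for the real syntax:

```
# Hypothetical field-filter configuration, for illustration only.
# Before 3.6.3: every FID in the filter had to be listed individually.
fieldFilterList : 6, 22, 25, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40

# With 3.6.3 range support: a hyphen between the first and last FID
# of a range stands in for the whole run of fields.
fieldFilterList : 6, 22, 25, 30-40
```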
We came up with this new feature that allows you to designate a range of fields within the full record by using a hyphen, or a dash, between the first and the last FID of a particular range you wanna use. The benefit is simply the ability to define or filter a range of fields using this hyphen rather than typing in each individual field. It'll reduce the time and overhead it takes to designate the full list of fields that you want included. All right. Number three here is the evaluation of serverless in the cloud.
Although we began offering the Real-Time Optimized data feed in 2018, for many years before we got involved in the cloud there were customers with different use cases, different requirements, and different infrastructures. We have not been using serverless, but we had a request to run the Real-Time Connector in a serverless environment, which at AWS is called Fargate. We have completed the evaluation at this point, and the ability to run RTC on Fargate is promising, but there are still a couple of outstanding issues to be addressed before we can actually qualify it and offer category A support, which is our highest level of support. The throughput has improved significantly. That's the main thing that we're looking for.
We're still seeing some inconsistencies, just so you have a little bit of an idea: there are some performance issues we're seeing on occasion, and some dropped connections. We're working with the Fargate team, and once we're able to address those deficiencies, we will be able to offer fully qualified category A support — hopefully by the next time we have another webinar. All right? The last line on this table indicates that there are a number of small enhancements and bug fixes included with each release. The what's new for 3.6.3 is available to download from My Refinitiv and includes a succinct description of each enhancement, large and small, and the bug fixes.
Now I'm gonna turn this over to Parivat, and he'll give us a demo of the time-based preferred host failback. Over to you, Parivat.
Thank you, Jeff. Hi, everyone. My name is Parivat Pongpaisach, RTDS software developer. Today I am going to demo the preferred host failback feature, which was introduced in the latest version of RTC and ADH 3.6.3. Before we get to the demo, let's go through one slide real quick. Next slide, please. This feature is specific to the setup where there are multiple hosts within a route. Right. In the diagram here, I have host 10 and host 11. Whenever there is a failover, typically the RTC will attempt to connect to the next host in the host list, which is host 11 here. This behavior has been there for years. Nothing has changed here, right?
As Jeff said, this new feature basically allows the RTC to automatically fail back to the preferred host based on your choice of configuration. That's kind of the overview. Let's get to the demo. I am going to share my screen. In my setup, I have two test servers running on different hosts. The one on the left is my preferred host, and the one on the upper right is my non-preferred host. Let me show my RTC configuration file. It's pretty simple. I have only one route, which is host SSL. To enable the new feature, I set enable preferred host to true. For my host list, I have two hosts configured, 17 and 16. The first one will always be the preferred host. Right. For the preferred host failback mechanism, I have two choices here.
The first one: I can specify a specific date and time at which I want my RTC to fail back. This is configured in cron job format with the preferred host detection time format config parameter. For the other option, right, I can just simply use a time interval, and this will be the interval at which my RTC will periodically attempt to fail back to the preferred host. Let me start up my RTC 3.6.3. My RTC is up. Right now it's connected to the preferred host. If I go to the source route statistics screen, I can see that the new feature, the preferred host failback, has been enabled here. I set the interval to 30 seconds.
I'm going to simulate a failover between my RTC and the preferred host. I'm just gonna simply kill my test server running on the preferred host. As usual, my RTC should fail over to the next host in the host list. If I look at my rtc.log, I can see that right now my route host SSL is connected to a non-preferred host, and the preferred host failback timer has been activated. We attempt to fail back every 30 seconds. Then I'm just gonna bring my test server on the preferred host back up. Within 30 seconds, my RTC should fail back to the preferred host. Here we go. My RTC just failed back to the preferred host. That concludes my demo for today. Thank you, everyone.
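For reference, the route configuration walked through in the demo looks roughly like the sketch below. The parameter names here are paraphrased from the spoken demo rather than copied from product documentation, so treat them as illustrative and confirm the exact spelling in the what's new for 3.6.3:

```
# Illustrative RTC route config for time-based preferred host failback.
# Parameter names paraphrased from the demo, not verified syntax.
*rtc*routeList : hostSSL
*rtc*hostSSL*hostList : host17, host16   # first entry is the preferred host
*rtc*hostSSL*enablePreferredHost : True

# Option 1: fail back at a designated date/time, given in cron format
# (e.g. Saturdays at 02:00, i.e. outside market hours):
# *rtc*hostSSL*preferredHostDetectionTimeFormat : 0 2 * * 6

# Option 2: retry the preferred host on a fixed interval (seconds),
# as shown in the demo:
*rtc*hostSSL*preferredHostDetectionTimeInterval : 30
```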
Thanks very much, Parivat. Hopefully that was quite straightforward for everybody. Again, this was in the 3.6.3 release that came out in December. The more you play with that, I think the more you'll be able to experience what's going on. We'll move across to a polling question. It's something we've been thinking about: how we can best put our software into the hands of our user community. There's a question on the Slido poll right now. We update our Real-Time Connector image on Docker Hub quarterly. Is it up to the customer to perform their own security and patches — as in, sort of, you know, the Red Hat patches, you have to localize that?
Do you use our Real-Time image, our Real-Time Connector image, or do you build your own container? I think you can pick as many of the options as you want. One: no, you didn't know it was there. Two: no, we're not allowed to do that — we've had some customers say they're not allowed to get access to Docker Hub from their corporate network. Three: yes, we do use that, but we build our own specific Docker container. Four: yes, we use the container that we download from Docker Hub, and pretty much that's the one that we roll into production. Those are your four options. It might be you do a mix of one, two, three, four.
If you have a vote on that, and then we can hopefully somehow see the results. Again, the poll's on the right-hand side of the screen, or if you're logged into Slido, then you can see them there as well. All right. We've got a couple more votes coming in. They're kind of coming in slowly. We'll give it another few seconds. Then we'll close that poll, and we'll have a look at it once it's up on the screen. Yeah, can we close that poll and then can see the results? See what they look like?
Hi, Matt. It's the LSEG moderator jumping in here. Just want to let you know that we won't be able to show the answers live on the screen, but I wanted to read those out to you. We've had several votes coming in on this. The most popular choice was actually an equal tie between audience members not knowing it was there and not being allowed to download the RTC, ADH, or ADS containers at work. In last place was using the Refinitiv RTC container downloaded from Docker Hub, with 7% of the vote. So it was an equal tie between the two answers I read out at the beginning there.
Okay. Thanks for that, Matt. Yeah, I think that kind of correlates to what we're seeing as well. There's some activity, we wanna make sure we're doing the right thing. If you've got any ideas or any suggestions on that, do put it into the Q&A or reach out to myself or to Jeff directly and let us know what's stopping you from taking advantage of that capability. 'Cause it is quite powerful, it does probably save you time in the long run as well. Okay, thanks for that. Shall we move on to the next section, please? Thank you. Mahesh, I'll hand over to you. Thank you.
Sure. Thank you, Matt. Hello, everyone. I'm Mahesh Bommanayakanahalli. I am the Development Manager for RTDS. Next slide, please.
Typically we have shared our performance metrics covering the various connectivity protocols — RWF, JSON, and SSL — with full fan out: how many connections can you make assuming a fixed inbound message rate, conflation, the cost of encryption, and snapshot and regular image retrieval. As you will have seen over the last two versions, we have added more capabilities. Now there is REST snapshot. We have made some improvements to conflation performance. There is encryption for REST and WebSocket in addition to the RSSL encryption that was always there. We have introduced channel threads, and we now support RWF over WebSocket in addition to JSON2 over WebSocket. We improved the WebSocket common view performance in the 3.6.1 version, and there are protocol-specific writer threads.
The new set of tests that we have been doing, in addition to the previous set of metrics, accounts for these capabilities. Also, based on customer feedback, we have limited most of the tests to 10 Gbps bandwidth, because that seems to be the most common deployment, even though we do see customers moving to 25 Gbps, both on-prem and in the cloud. Also, based on some analysis of live data on Elektron Real-Time, we have increased our payload for this testing. Now the image size is four times what it used to be, and the update is 130 bytes.
There are a number of ways in which the RTC can be deployed. We have a bank of channel threads which feed the messages from the publishers to a set of item threads. The item threads are where we do the caching, the conflation, and the delay. The writer threads are the ones interacting with the end applications.
That is where the last-mile features such as compression, encryption, traffic management, and buffering all happen. This is something to remember as we do the performance analysis and also the deployment. This threading model really scales well, and based on your requirements you can choose different numbers of each type of thread, based on the compute and the bandwidth that is available. Next slide, please. I touched upon the need for using the larger messages. That's because venues have added more fields, the granularity of the timestamps has increased, and some of the fields have grown. We validated this with the live data captures. What we see is that there is an impact on both compute and bandwidth because of this.
If you have some previous results based on the 74-byte messages, with the new message sizes you would see about 70% of the prior throughput. This set of slides will show you what the cost is in bandwidth and what the cost is in compute, so that you can still leverage some of the earlier published results. Next slide, please. Okay, one of the capabilities that Refinitiv Workspace uses is dynamic field filtering. Jeff touched upon the field filtering capability that's been in the RTDS platform. That is static field filtering: you define the set of fields that is available through a service, for all the users of that service.
The dynamic view is defined by the client application, so different applications can request different sets of fields. That still reduces the number of fields and the amount of data that is sent, based on the application's requirements. Some of the captures we have done leveraging the base template from Workspace — they use 55 fields — show that you typically get three fields for a quote update and five for a trade update. That reduces the message size by a factor of three. Next slide, please. What this means, as you can see from these two slides, is that there is more work to do on the sending side. Look at the WebSocket JSON2 view numbers, that is, with common views: there is a throughput impact on the sender, the ADS or the RTC.
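To make the dynamic view concrete: over the WebSocket JSON2 protocol, the consumer names the fields it wants in a View array on the item request, and the server only fans out those fields. A minimal sketch in Python — the RIC and service name are placeholders, not tied to any particular deployment:

```python
import json

def market_price_request(stream_id, ric, service, fields):
    """Build a JSON2 market-price request with a dynamic view:
    the View array asks the server to deliver only these fields."""
    return {
        "ID": stream_id,
        "Key": {"Name": ric, "Service": service},
        "View": list(fields),
    }

# A Workspace-style quote view: a handful of fields instead of the
# full record, cutting the update payload substantially.
msg = market_price_request(2, "EUR=", "ELEKTRON_DD", ["BID", "ASK", "TRDPRC_1"])
payload = json.dumps(msg)
```

Because the view is defined per client application rather than per service, other connections can hold the same record open with entirely different field lists.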
On the right-hand side, you will see that we made some enhancements to our view processing. We see about a 50% boost when there is full commonality — this is 100% commonality with a fan-out of 100. If you are using Workspace, version 3.6.1 gives you the big boost in performance. Channel threads are another capability that we introduced a while back. That's more of a simplification of the connection-management aspect rather than a performance one, as you will see at the bottom. If you use the same number of cores, there's actually a small drop in throughput, but we still think this is a better deployment because it helps with back pressure upstream and also reduces the connections.
Providers may not have the capability to take 10 or 12 connections per user, sometimes, based on the configuration. That's another aspect to consider there. The other one that I want to touch upon here is that, in addition to JSON2 over WebSocket, we also have RWF, the binary format, over WebSocket. That gives you performance that is closer to the socket performance of RWF. Next slide, please. This is just a quick comparison. The existing RTDS deployment with multicast has an ADH and an ADS, as we all know, and the recommended architecture for the TCP-based solution is two layers of RTC. One of the performance aspects to consider: an ADH just writes the message once on the backbone, and that is its throughput.
Whereas an RTC, because of the TCP mesh, has to write the message multiple times to the fan-out layer, depending on the interest at the distribution level. That's one aspect to consider. Next slide, please. One of the other questions that we keep hearing is: okay, the 100,000-item watch list that you use for the benchmark is good, but we have a need for 1 million or 2 million items. What is the impact when we have no commonality, or when we have 100% commonality? This set of slides shows that there is really no impact on performance, because we have already taken the hit with the 100K watch list itself, with all the data going outside the processor's L1, L2, and L3 caches.
As you increase the watch list in the cache, the only impact is that you need more memory. Other than that, the fan-out and the throughput remain the same. That's all I had, and if there are questions, I'm happy to answer. Back to you, Matt.
Thanks very much, Mahesh. Also, just as a quick aside for those that don't know, Mahesh has been with us for a long time, most recently third level support, but he's also taken on responsibility as a Development Manager now. Mahesh, I think you're now working 24 hour days rather than the 16 you were doing before.
Something like that. Yeah. No, that's all right.
Just wanted to say congratulations. Okay, we can move on to the next slide. Jeff, I don't think we're gonna run through these in any detail, unless there's anything you wanted to pull out, just for the sake of time? Jeff, over to you.
Yeah. Okay. Essentially, we don't have a lot of time. Couple things I think are important. If you can go on to the next slide, actually, I'm just gonna hit on Q1, the first three items here, right? A cloud-friendly licensing system. Let's face it, we all know and anybody who's attempting to migrate to the cloud or has already migrated to the cloud, we know our existing node-locked licenses with a static host name and IP do not work in cloud deployments. This is something we've been looking at. We had to push it out to Q1, but we're working on a floating license server, with a benefit that will basically allow you to take advantage of elasticity within the ephemeral environment within the cloud.
We're expecting to have that done by the end of this quarter. That's what we're looking at. The next item down is the JSON Web Token for authentication. Authentication and security have become top priorities for anyone moving to the cloud, and we're implementing a new level of authentication for RTO. I'm gonna let Luke talk about that, so that saves me a couple of seconds here. The third item here I wanna talk about is the open-source operating system. Basically, there are strategic advantages provided by moving to open source — in this case, both for on-prem customers and for customers migrating to the cloud. There's a survey here by the FinTech Open Source Foundation that was just published last month.
The link is there; that'll be available when you get the follow-up docs. It discussed the benefits of open-source applications — in this case, specifically improvements in time to market and lowering the total cost of ownership by using open-source products. It's something we've been asked about from time to time: support for another operating system. Ubuntu gives us a little bit more flexibility, which I'll get to when we discuss the DACS piece later, and we're aiming to qualify the Ubuntu operating system at the end of Q1. That's it for me at the moment, Matt. We're trying to preserve some time here.
Thank you everyone, and if you have questions, please send me questions.
Thanks very much, Jeff.
Thank you.
Let's move on to the next slide. Just while we're doing that, I know quite a few questions have come in around the failback. We'll endeavor to get to those at the end. If not, maybe we'll do a write-up, 'cause there are some really good points around the level of detail that you guys need to know. Let's move on to the next section, which Luke will be presenting. I'll hand the microphone over to Luke. Luke O'Sullivan, thank you.
Lovely. Thank you very much, Matt. Thanks, Jeff. My name's Luke O'Sullivan. I'm Product Manager for Real-Time – Optimized, and today I'm gonna be talking about Real-Time – Optimized and the enhancements we've got coming along, and how they specifically relate to the customer identity and access management platform — it's a bit of a mouthful, so it's called CIAM — coming through this year. Before we do that, we want to just touch on some highlights from last year for Real-Time – Optimized. We've got support for consumer warm standby. We've launched two additional AWS regions for Real-Time – Optimized: Tokyo and Frankfurt. That brings our global coverage to six centers, all fully redundant with two availability zones each.
We've also launched a live service for Real-Time – Optimized over our last-mile offering, Delivery Direct. What we're really here to talk about today is what we've got coming up for 2023 and the changes to the CIAM platform. If we go to the next slide, please. There was a webinar in November last year by our colleague, Magdalena Pszczółka, around an hour long, going through the background for the changes to CIAM, the reasons we're doing it, and what it means for a variety of different Refinitiv products. The highlights are, you know, a highly resilient, cloud-based platform, and then seamless updates of security and product features.
If you weren't able to make that webinar, we've got a link for it here. It is really good; it goes into good detail about the background and the technical implications for various products as well. We don't have time today to talk about the technical side. What we actually wanna focus on are the product features and enhancements that are coming alongside that. If we can go to the next slide, please. Along with the change to the CIAM platform, for Real-Time Optimized we're actually moving the product in step with the launch of the Refinitiv platform administration tool. This is gonna give customers the ability to do a lot more themselves, which they currently can't. We've been taking feedback from clients for a while, right?
We're slow to get IDs ready. Support for simultaneous logins isn't as good as it could be. Customers don't have a lot of visibility of what's going on with their IDs once they use them. We're actually building that into the platform administration tool. With this, customers are gonna be able to create their own IDs. They're gonna be able to entitle them as well. We're looking forward to seeing the reduction in the time it takes customers to get into production with them. We're reducing the complexity. For customers who are running large platforms that currently need lots and lots of IDs to support them, we're gonna reduce those down to as few IDs as possible.
In the service insight dashboard that we're releasing as well, you get a view of the status of the service for Real-Time Optimized, as well as a view of what your IDs are doing right now. We're looking to move to this model late in Q1 this year. As I said, Magdalena's webinar goes into detail around the timelines for those as well, so please do watch it. What I'm gonna do next is give a walkthrough of the platform admin tool, to give you an idea of how quick it is to create an ID for yourself, and then a view of the service insight dashboard. I'll need to share my screen in a second. There we go. Right. Hopefully this is sharing. Ooh, not yet. There we go.
Hopefully, this is sharing the platform administration tool. First thing to mention: there are a couple of cosmetic changes which are gonna come through before it goes into production, and I'll point those out when we get to them. This is the landing page for a customer. I'm using a Refinitiv account we've got set up for our technical specialists. What I'll do is talk through how to raise an ID. You do this through application management. Click through here. I've already set up one set of IDs for our team. When customers move in here, this part will be blank. The first thing you need to do is create an application.
An application being the actual app that is going to make a direct connection to Real-Time Optimized. If you've got an enterprise-wide platform with a lot of different apps, you don't do this for all of those apps; you do it once for the app that's going to draw in the feed. You should always give it a name which is relevant for yourself. I'm gonna call this Webinar RTDS. There'll be a few certification questions to say what the application is going to be used for. This is one of the things that's gonna be changed — we're kind of working on these, but you get the idea. It's: do you have an enterprise-wide platform? Do you have an entitlement platform of your own? If so, what is it?
What kind of use cases are you using the real-time for? There will be full training materials that we provide here as well. This isn't a training session, just giving a quick overview. Once you've done this, you click Add Application. This is the worrying bit. There we go. We see now you've got a second application showing here, and it's underneath this application where you'll create your ID or IDs. The IDs will be kind of linked to what we call a service account. You can create multiple service accounts here. The reason being, if you do have RTDS as a platform, you might have a production environment, a UAT environment, a disaster recovery environment. So you can create, if you needed, multiple instances, or just one.
To do this, quite simply, click on Create Service Accounts. You enter the name here, and we'll call it Webinar RTDS Production. This bit looks a bit odd right now, because it's a dropdown with one entry. As Jeff mentioned earlier, we're introducing a second authentication system for Real-Time Optimized as well, which will be JWT. When that's available later this year, as a customer you'll get the choice of client secret, which is a password, or JWT. At the moment, it's password only. When you click on Add Service Account, you get a success message, which is good. What you see here is a service ID, which is unique. The service account name that you give doesn't have to be; multiple customers can give exactly the same service account name. It doesn't matter.
You'll get a unique service ID, and then you see the password. It's very important to make a copy of that password, 'cause once you click Close, it goes away and you don't see it again. It's very easy to reset a password in this tool as well. You just go in here, click Reset and Show Password. There you go. You've got the same service ID and a different password. That's how quick it is to create an ID. This ID can't do anything at the moment, though, because it hasn't got any permissions against it. The next part is: how do you get the data onto the ID? You do this from Manage Licenses.
When you click on Manage Licenses, this will show you the available licenses that you've got agreements for, from Refinitiv or from third-party exchanges and specialist data providers, that you can pick from. If you click on Manage License, you get two options here: General License and Real-Time License. General License will show every license for any platform-enabled products that you have. At the moment, from a real-time point of view, it's only Real-Time Optimized, but other products will come on board. What the platform team have done is introduce a wizard for Real-Time Licenses, and as other products come on, there'll be wizards for those various products. We use Real-Time Licenses here. This prompts you to pick from at least three sections and potentially four. The first section is a stream ID.
This relates to the product you've bought. If Real-Time Optimized is the only product you've bought, you'll only see one stream ID. That's what we pick there. Watchlist licenses is basically whatever size of watchlist you've taken. In this demo, because it's an internal account, there's a variety of different watchlists; for you, only what you've signed for would appear in there. These next two parts are other places where there's a cosmetic change. Everything's going to be moving to a drop-down method. Unfortunately, in the version I'm demoing here, you have to start typing. This would be the data license, such as your exchange-traded instruments or over-the-counter type license. You'd add those. It's not very good having to type there, unfortunately.
That's why we're changing it to a drop-down before it goes out into the field. If you take any exchange data or specialist data, then that would appear in there. This is an in-house account, so we've got a specialist in-house exchange code that we use, but the principle is the same as it was for those other parts. You would then add those licenses, and we're hoping to see a green success bar. There we go. That's good. These licenses are now pending assignment. It takes a couple of minutes for those licenses to assign through the platform part, and around 10 to 15 minutes to get all the way through the system.
That's taken maybe five minutes or so to get through this part, and then another 15 minutes or so to get through the system. Within about 20 minutes, we're expecting customers to have a usable ID for Real-Time Optimized. There we go. We can see those licenses are already assigned now. That's one of the enhancements there around the speed of production. As I said, this ID can be used simultaneously. If you're running a resilient RTDS platform, you wouldn't necessarily need to have two IDs. You could just permission one ID once and use it in both sides of your platform. It reduces the risk we see at the moment where you have multiple IDs being entitled individually.
We remove that risk as well. The next part we wanna look at very quickly is the service insight dashboard. This is where you see the health of the service and what your IDs are doing. There's a lot of white screen on the top half at the moment because we've got a full roadmap of features as well, but these are the features we'll be going into production with. It shows you the six regions that we're in and the status of them, and this is a production environment as well. This isn't a pre-prod environment or any CIAM data; this is live, what we have now. We have, as you hopefully know, multiple endpoints for any center: a couple customer managed and one Refinitiv managed, and then three tiers as well.
By clicking on here, you can open this up and see exactly what's going on. You can click on the map, and it opens it there as well. If there was an incident, for instance, in customer managed one, small tier, for Frankfurt, this would be either orange or red in this part. In the map it would appear orange just to say something's up; it would only appear red if the entire center's down. You could click on there and say, "Okay, customer managed one, we've got a problem. It's not a big deal — I'm a medium tier customer, you know, medium tier Refinitiv managed." It gives you a lot better visibility of exactly what's going on. This top half of the screen is common for all customers.
The lower half, with the service account status, is where you would see exactly what's going on with your own IDs. We've got an ID that we've set up and connected to the environment as well. You can see it's making two simultaneous mounts, one with about 5,000 instruments and one with about one active instrument. Just for the benefit of those on smaller screens, I'll try and expand that — hopefully that's showing nicely. This just gives you the detail of what they're called. You can see this is a unique name we've given it; it means something to me, so I know what's up. Here we've got the service ID, a random string of letters and numbers. What we see under the watchlist part is capacity — the watchlist that we would have assigned.
It's a 50,000 watchlist. We can see how much each mount is using, and then we see a percentage of the overall capacity that you're at. You'll be able to see there if your usage is going up and up and getting towards 100%. At the moment, customers are a bit blind to that; this gives you the ability to see in real time what's happening. We haven't got enough decimal places, unfortunately, so it's rounding down. If you do have one instrument and you've got a 50,000 capacity, it shows as zero, but that's just the way the rounding down has worked. Further to the right, it shows detail on which set of infrastructure any ID is connected to.
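The rounding-down behaviour just described is easy to illustrate. This is a hypothetical sketch, not the dashboard's actual code — the function name and the floor-rounding are illustrative assumptions consistent with what the demo shows:

```python
# Illustrative sketch (not the dashboard's code) of how a floor-rounded
# whole-number percentage hides very small watchlist usage.
import math

def usage_percent(instruments: int, capacity: int) -> int:
    """Percentage of watchlist capacity in use, rounded down to a whole number."""
    return math.floor(instruments / capacity * 100)

# One active instrument against a 50,000 watchlist rounds down to 0%.
print(usage_percent(1, 50_000))      # 0
print(usage_percent(5_000, 50_000))  # 10
```

So a mount with a single active instrument genuinely shows 0% until its usage crosses 1% of capacity (500 RICs here).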
We can see that we have one connection to customer managed one and one to customer managed two. One is in the large tier, because it's a WebSocket connection — 50,000 instruments is a large WebSocket connection — and one is in the medium tier, because it's an RSSL type connection. One has gone to the US East 1, A side, and one to EU Central 1, B side. It also shows the portability of IDs: you can use them in any of the available Real-Time Optimized centers that you want. These are the features we're going live with. What we've also got coming along: at the moment, if you wanna see service alerts, unfortunately this is gonna take you to a My Refinitiv page, which shows active service alerts and maintenance alerts.
We're going to embed those into this page, so you don't have to jump off. Same with help and support: this is going to take you to a Contact Us web form to raise a ticket. We're building a chat agent into this screen as well, where directly from the screen you can start talking to somebody. The good thing is, this page knows your account number, so if you raise a ticket through the chat agent here, you don't have to worry about scrabbling around for your account number anymore. It's already going to be embedded in the ticket. You've got easy access to the service IDs that you might be talking about through here as well. It should make for a much quicker, simpler process for raising tickets.
We're going to look to show historical maximum usage by day as a graph, and also be able to dump that as a number of RICs — you know, over the last three weeks, what's my max RIC usage per day? That's quite important to customers. We'll embed product change notifications in here as well, so that if you're looking at this screen when you log in and we've got a change notification, it will appear as a banner. You have to read it before you can shut the banner down, so it avoids missing product change notifications — as we know, unfortunately, not everybody subscribes to them. In the interest of time, I'll stop there; we've got many more features to come through for these.
Obviously we're very interested to hear your feedback on what else you would like us to add. My contact details are in the webinar, so please do get in touch. That concludes my demo. Thank you.
Great job. Thanks very much, Luke. A lot to pack into 20 minutes or so there. I appreciate there's probably a lot of questions; as Luke mentioned, we're gonna come up with a lot more material on that. There's gonna be tutorials, documentation, walkthroughs, separate sessions. There's a bunch of questions that popped in around that as well; we'll aim to get through a couple of those. Just before we do move to Tirthankar, I did just wanna address one of the questions that Chris put on, before it gets upvoted any more: "Do I understand correctly, multicast is no longer the recommended architecture for RTDS?" Categorically, I can say that is not the case. Apologies if it came over that way.
It might have been because we're talking about what we're doing in the cloud, which is purely unicast at the moment, as we've talked about before. Some customers are looking at simplifying their network infrastructure and actually looking at going unicast on-prem, but we don't plan to drop any of that multicast support. Indeed, it's critical to what we do internally as well as to many of our other customers. I just wanted to nip that in the bud before it got a head of steam on the Slido poll. Okay. I will now pass over to Tirthankar, who's gonna talk about some API strategy updates with the help of a few others. Tirthankar, over to you, please.
Thank you very much, Matt. My name is Tirthankar Bhaumik. I'm the product manager on the real-time APIs. As most of you will be aware, we are going to be end-of-lifing the legacy APIs, SFC, RFA, and support for Marketfeed and SSL on RTDS. The end-of-life notice for this will go out in March 2023.
However, RTDS is also planning to roll out a rolling obsolescence program. I think it is very important for us to understand how these two — the end of life on one hand and the rolling obsolescence on the other — will work together. To start this off, I'd like to invite Jeff to talk us through the RTDS rolling obsolescence first. Then I will overlay the Marketfeed SSL end of life on top of that. Jeff?
Sure. All right. Thank you. Some customers are aware of this because I've talked to some over the past six or eight months, whatever it's been. Basically what we're going to do, we're supporting many versions of software out there right now. We're announcing a rolling obsolescence plan to end support for older versions of RTDS. That will include ADH and ADS and ADS POP, and eventually will even include RTC, but not initially. The plan goes into effect the first of October of this year, and at that time, we'll only be supporting versions 3.5 and higher. To make it easier to keep track of what releases are supported, we'll offer an ongoing support for the current version and two previous versions of RTDS software or T minus two, as Matt likes to say.
It's probably easier, let me run through an example. We anticipate the next version of RTDS that we're gonna launch at the end of March will be 3.7. All right? At that time, you're gonna also receive a 6-month notice that the current and two previous versions will be supported as of October 1st. That means as of October 1st, we'll continue to offer support for version 3.5, 3.6, and then 3.7. Again, the rolling obsolescence plan will mean when we get into next year, October of next year, then we'll deprecate 3.5 and go with 3.6, 3.7, 3.8. All right? The objectives of the plan are really simple. We're gonna provide you a reasonable shelf life for RTDS software of approximately three years.
We'll position you to take advantage of newer features and capabilities, and it allows us to continue to innovate and add value to the platform by focusing on newer software releases rather than supporting older releases. That's it for now from me. If there are questions, obviously we can address those, but I'll pass this back to Tirthankar. Thank you.
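The "T minus two" rule Jeff just walked through can be sketched as a small helper. This is an illustrative sketch, not Refinitiv code — the release list and the idea of indexing an ordered list of version strings are assumptions made for the example:

```python
# Hedged sketch of the rolling obsolescence rule: the current RTDS
# release plus its two predecessors stay supported ("T minus two").
def supported_versions(current: str, releases: list[str]) -> list[str]:
    """Return the current release and up to two predecessors from an ordered release list."""
    idx = releases.index(current)
    return releases[max(0, idx - 2): idx + 1]

releases = ["3.5", "3.6", "3.7", "3.8"]

# When 3.7 launches, 3.5 / 3.6 / 3.7 are the supported set as of October 1st...
print(supported_versions("3.7", releases))  # ['3.5', '3.6', '3.7']
# ...and when 3.8 arrives the following year, 3.5 drops off the list.
print(supported_versions("3.8", releases))  # ['3.6', '3.7', '3.8']
```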
Thank you very much, Jeff. Now I will overlay the legacy Marketfeed end-of-life changes on top of Jeff's rolling obsolescence plan. Next slide, please. Thank you. Let's run through the milestones sequentially. Milestone one, Q1 2023, which is essentially going to be March 2023: we send out the end-of-life notice for support for Marketfeed and SSL on RTDS, and the end-of-support notification for the legacy APIs, RFA and SFC. Parts of LPC — the OMM to Marketfeed conversion — will be end of life as well.
We send out that notification in March 2023, and in it you will be given three years to migrate your applications from the legacy APIs to our strategic APIs. That three years comes to an end around Q1 2026, probably February 2026, which is milestone two. At milestone two, February 2026, the legacy APIs, RFA 7.X and SFC, will become unsupported. However, these APIs will continue to work, because for another 30 months there will always be a version of RTDS that supports Marketfeed and SSL. At milestone two, there will be three versions of RTDS — 3.7, 3.8, and 3.9 — that support Marketfeed.
Milestone three: when we release RTDS 4.0, that version no longer supports Marketfeed, but you still have 3.8 and 3.9 supporting Marketfeed. Milestone four: RTDS 4.1 is released. We've got two versions of RTDS that don't support Marketfeed, and 3.9 continues to support it. At milestone five, 4.2 is released. We now have three fully supported versions of RTDS that do not support Marketfeed, and there is no version of RTDS that supports Marketfeed. At that milestone, the legacy APIs will stop working. The essential takeaway from this is this: you will initially be given three years to migrate.
If you haven't been able to complete your migration by then, you still have another 30 months to complete your migration. All in all, what you have is just under six years to complete your migration. It is very important we understand this because it may have ramifications for you. I think we have a polling question after this.
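The arithmetic behind "just under six years" is simply the three-year migration window plus the 30-month tail during which at least one supported RTDS version still speaks Marketfeed. A quick check, with dates simplified to whole months:

```python
# Arithmetic check of the migration window described above.
initial_migration_months = 3 * 12  # March 2023 notice to milestone two (~Feb 2026)
tail_months = 30                   # milestone two until the last Marketfeed-capable RTDS drops off
total_months = initial_migration_months + tail_months

print(total_months)       # 66 months in total
print(total_months / 12)  # 5.5 years -- "just under six years"
```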
Yeah, we do. We'll leave this polling question open for a while. We're obviously not gonna read the results. It's gonna be down to each individual, but it'll allow you to understand what you want, what you wanna get out of this. Hopefully the last couple of slides and then when you have the presentation, you either understand what the implications are now and you're in control, or you might want to digest it or, no, actually, can you please contact me, which Tirthankar is very happy to kind of sit down one-on-one and walk through what those plans mean in a little bit more detail and the implications around that. We'll leave that poll to run in the background. We won't close it until the end of the webinar.
For those that have said, "No, please contact me" — absolutely, yeah, we can do that. If I just have a quick look at where we are at the moment, probably about 60% get the idea and 40% don't. Sounds like, Tirthankar, you're gonna be on the phone a lot. Let's see how the results change once today's session has been digested. Should we move on to the next slide then, please, Tirthankar?
Yes, please. Thank you very much. I have an announcement to make. I'm very pleased to say that we have released our first real-time high-performance C# API, as part of the Refinitiv Real-Time SDK. As you're all aware, the Refinitiv Real-Time SDK consists of two APIs, the low-level Enterprise Transport API and the high-level EMA. This is the Enterprise Transport API in C# on .NET Core. This API complements the existing offerings that we have in ETA, which are the ETA C and the ETA Java. The public interfaces will have a similar look and feel. There will be some language-specific or syntactical differences, but the interfaces will more or less look the same.
Our first release consists of the consumer and the non-interactive provider, and it has socket support. The interactive provider will be implemented next year, and once the API has hit the market, based on market feedback we may implement WebSocket support as well. This API is available in three locations at the moment. It's available to download on our Refinitiv developer portal. It will shortly be available on NuGet, the package manager that enables developers to create, host, and consume libraries for .NET. It is also available on GitHub: we are open source, so the source code is available for you to inspect.
What is coming up during the year is a Refinitiv Academy session, where we'll be doing a deep technical dive into this API. I'm really looking forward to the EMA C# release in October 2023. There is just one more thing I'd like to point out here: the release of this API does not mean that there is any imminent end of life for our legacy RFA.NET 8.X API. RFA.NET is a legacy API, so if you are migrating away from it, or planning to, please go ahead — it is a legacy API and it will be end of life at some point in time. Whenever we end of life it, you will be given two years' notice.
There is no imminent risk of end of life to this API. Next slide, please. I'm just going to quickly run through some of the highlights of this roadmap, because I'm conscious of time. At the end of this month, January 2023, we release the API performance results and tuning guide, which essentially benchmarks API performance. I'll jump to point three: in March 2023, we will be sending out the end-of-life notice for Marketfeed, SSL, and the legacy APIs, as I just alluded to. I'll jump to point five, where in October 2023 we will have the EMA C# release on .NET Core. Next slide, please. This is again a roadmap of the C# API.
I'm going to skip this because I'm conscious of time. Next slide, please. Since last year, end of lifing of legacy APIs and migration to strategic APIs has been a really big focus for us, and it will continue to be a focus for us in the forthcoming years until obviously we've got everybody migrated. When you think about your legacy APIs on the left and the strategic APIs on the right, the migration choices you have... The first one is the ETA, which is a low-level API, and it integrates with the operating system really well, providing you with the lowest latency. The ETA API is available in C, Java, and obviously now .NET. The EMA API, which is a high-level API, again low latency, but easier to work with.
The EMA API is available in C++, Java, and .NET in October 2023. You could obviously migrate to a WebSocket API, and you could use any of those frameworks that have been listed below, Perl, Python, Node.js, R, Ruby, so on and so forth. If you are thinking of migrating to the WebSocket Python API, you could also consider migrating to the Refinitiv Data Library for Python. This library is a high-level ease of use wrapper over WebSocket, and it implements administrative tasks such as login, authentication, and connection management. That is the ease of use aspect of this API. At this point in time, I'd like to hand over to my colleague, Olivier Davant, who will provide us more details about the Refinitiv Data Library. Thank you very much.
Thanks, Tirthankar. My name is Olivier Davant. I'm the product manager for the Refinitiv Data Libraries. Today I would like to give you a five-minute overview of those libraries. The Refinitiv Data Libraries can be viewed as a natural extension of the Refinitiv Data Platform. They simplify access to the platform and provide ease-of-use data retrieval APIs. These libraries are available for Python but also for TypeScript and .NET Core. Today I'm going to focus on the Python version, and more specifically on the real-time streaming features of this library. Those libraries are not intended for high-performance scenarios: if your application needs high performance, you should consider using the Refinitiv Real-Time SDK instead. The Refinitiv Data Libraries can also be used to retrieve non-streaming data via request-response or bulk files, but I will not show you that today.
They have been available since September 2021. Next slide, please. One of the great features of the Refinitiv Data Libraries is that they offer you a consistent way to access Refinitiv data regardless of the access point your application uses to connect to the Refinitiv Data Platform. For example, you can write an application that gets streaming data from the Refinitiv Data Platform — Real-Time Optimized data, for example. Very easily, you can switch your application to get the same kind of data from a Refinitiv Real-Time Distribution System, RTDS. You could even switch to Refinitiv Workspace, which is our desktop application.
The only thing you need to do to switch from one access point to another is either to change a configuration file or, if you don't want to use configuration files, to change the initialization phase of the library in your code. Next slide, please. The library was designed as a stack of layers to provide both ease of use and flexibility. The top layers give you better usability, while the bottom layers give you more flexibility. Let's describe those layers from the bottom. The session layer is the lowest one; it manages your session with the platform. It takes care of authentication, token management, connectivity, reconnection, these kinds of things. On top of it, you have the delivery layer.
This layer is content-agnostic, and it manages the different delivery mechanisms provided by the Refinitiv Data Platform, meaning request-response, streaming, bulk files, et cetera. On top of this delivery layer, which is quite low level for the Refinitiv Data Library, we built the content layer. This one is content-specific: it contains classes and objects representing financial items such as level one market data, news, historical pricing, et cetera. On top of this layer, we built the access layer, which brings even more ease of use by defining higher-level interfaces that add value on top of the content layer. Of course, you can mix those different layers in your application depending on your needs. Next slide, please.
This is a very short code example, just to show you how easy it is to use the Refinitiv Data Library for Python. Here we are using the access layer, the highest layer of the library. At the top of this code snippet, you see the open session call. That's how you initialize the library. Here I'm just specifying the name of the access point I would like to use — in this case, it's the Real-Time Distribution System. If I want to switch to another access point like RDP or Refinitiv Workspace, I just need to change the name in the open session parameter; this actually relies on the configuration file. The second block of code is a callback that I define to receive incoming streaming data; I just display it in this callback.
The next block of code is a call that opens a pricing stream for a specific service — Elektron DD in this case — for a universe of RICs and a list of fields. Here I indicate which callback I would like this stream to use. The stream is returned, and the data starts flowing in. You see, it's very, very simple. If you want to get more details about the streaming events, like the updates, refreshes, statuses, these kinds of things, you can use the other layer — the content layer — which gives you more details and, of course, more flexibility in the way you're consuming streaming data. In terms of streaming, the library allows you to subscribe to level one prices (market price subscriptions).
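The snippet Olivier walks through can be sketched roughly like this. This is a hypothetical reconstruction, not the slide's actual code: the function names follow the refinitiv-data package (`rd.open_session`, `rd.open_pricing_stream`), and the session name, service, RICs, and fields here are illustrative assumptions. The library calls are kept inside `main()` so the sketch can be read without the package installed:

```python
# Hypothetical sketch of the access-layer flow described above -- not the
# slide's actual code. Names follow the refinitiv-data package; session
# name, service, RICs, and fields are illustrative assumptions.

def display_update(data, instrument, stream):
    # Callback fired for each incoming streaming message: just display it.
    print(f"{instrument}: {data}")

def main():
    # Import kept local so the sketch loads without the package installed.
    import refinitiv.data as rd

    # Initialize the library against a named access point from the
    # configuration file; switching to RDP or Workspace would just mean
    # changing this name.
    rd.open_session("platform.deployed")  # assumed session name for RTDS

    # Open a level-one pricing stream: service, universe of RICs, fields,
    # and the callback to invoke on each update.
    stream = rd.open_pricing_stream(
        universe=["EUR=", "GBP="],  # illustrative RICs
        fields=["BID", "ASK"],
        service="ELEKTRON_DD",
        on_data=display_update,
    )
    return stream
```

Switching access points really is just a matter of changing the name passed to the session call, with the connection details living in the configuration file.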
It also supports level two subscriptions — market by price, market by order. You can expand chains with the library, or even send contributions. Next slide, please. This slide is just a summary of everything that I explained — the takeaways. You can come back to this slide, and you will find a good summary of the Refinitiv Data Libraries. Next slide, please. If you want to learn more about those libraries, you can refer to the learning materials available on the Refinitiv developer community. You will also find there links to the Q&A forum, to the GitHub repositories for the examples, and to the Python Package Index, where you will find the library. Next slide, please. I think that's the polling question. Yes.
That's the end of this very short presentation. Here is the next polling question. Thank you very much.
Thanks very much, Olivier. Hopefully, for those that have been paying attention on the Slido, you'll see this question pop in. Olivier's done a great job there of going through, at a high level, what this data library can do for you. Obviously there's a bunch of material that he's got linked in there if you want to have a look and a read through. What we're looking for is really your initial thoughts: would you like to use the Python version, the TypeScript version, or the .NET version? Are you thinking about another programming language or technology? Or are you actually happy with the existing range of APIs and SDKs, so you don't think you'll need it?
We'll leave that poll open for a little while. I'll come back to it, because we're already up against time. I've already closed the other poll about help around the API obsolescence; I've seen there are some questions on that as well. I'm gonna try and speed through things 'cause there's some questions I think we should really try to get to before the end of this session — hopefully we can get that done within 90 minutes. What I'm gonna do now is move straight across to the next section, which is gonna be around DACS. I'll hand over to Jeff and Ted.
Thank you, Matt. Almost forgot to mute there. Unmute. Got it. All right. Initially, if you can go to the next slide, please, I'm gonna run through the large enhancements on the roadmap for DACS version 7.9. I know it says Q4; it was actually released the first week of January. We could have changed this, but it was close enough. Anyway, the first item there you'll see is the port of the DACS utilities to RSSL. The benefit's really simple here: porting these last four utilities over removes any dependencies on the legacy 32-bit infrastructure, and the legacy API is now replaced. We're going with the more strategic RT SDK APIs, right?
So that's a big benefit that we've been working on; it's now finally complete. As soon as I finish this page, Ted's gonna run through some demos of the new features. Second line: a GDPR requirement is that former employees' sensitive data needs to be anonymized after a specific period of time. Even though the DACS ID itself may not be sensitive data, an employee's email address would be, so an employee email address needs to be anonymized. There's not a lot of sensitive data that DACS maintains, but email would be one. The benefit of this feature is that it will automatically anonymize a former employee's sensitive data after the period specified by the GDPR regulation.
One thing to keep in mind: GDPR is pretty much a European regulatory framework that you have to follow, obviously, if you're in Europe, but if you're feeding RTDS to users outside of Europe, they would be required to be covered here under GDPR as well. Just so you're aware of that. The next item — and again, I apologize for going fast; we're just running out of time — is an upgrade to the DACS help system. There are 16 docs released with every DACS release, and unless you really know specifically what you're looking for, it can sometimes be very difficult and frustrating to determine which doc you're supposed to look up product information in.
There's a search window that allows you to enter a keyword search across all 16 docs and provides a list of the results. What we had to do was upgrade the API behind that interface, and we just want you to know that, beginning with 7.9, the UI is gonna be somewhat different. What we're seeing is better performance, and there are actually newer features — which Ted will also cover when he does a couple of demos — that lead to an enhanced user experience. The last line here: just as with RTDS, every release has a number of small enhancements and bug fixes.
There's a What's New DACS 7.9 available on My Refinitiv that provides a succinct description on each of the large and small enhancements and bug fixes. I'm just gonna turn this over to Ted now. He's gonna do a couple demos for us and give us an update on the database. Thank you.
Thank you, Jeff. I'll share my screen now. Hopefully you can see my screen. We're going to cover the highlights of 7.9 first, and then a little bit of 7.10 as a heads-up of what's going to be happening. The first one I wanted to cover: when you installed previous versions of DACS, the binaries were all 32-bit. People kind of knew you could go over to the 64-bit directory, and most of the binaries would be 64-bit, but there were a few that weren't there. Now, however, when you install DACS — if you run file on the binaries, you'll see — everything defaults to 64-bit, which is a good thing.
The second thing you'll see different in 7.9 — and it shouldn't matter to anyone — is that on the infrastructure load I removed Red Hat 5. You shouldn't be surprised; Red Hat 5 has been dead for a long time, but to save space, I've actually removed it. The next one that's going to be important in 7.9: I'm gonna start a Map Collect, and what we're gonna watch is this active RSSL mounts count. Right now it's 0. Let's do the actual click — we've started the collection. Let's go back to our adsmon. You see Map Collect is now mounted to the ADS, and of course it uses RSSL. Why did I do this, right?
Before, if people remember, it was on SSL, and besides SSL it was using Marketfeed, and it was 32-bit. What is it now? It's now RSSL, 64-bit, using OMM RWF, right? I didn't wanna be the last one on Marketfeed when the music stops, so DACS has now switched over. All the utilities have been switched over: Map Collect, Item Requirement, Perm Test, and Subscription Repair. Let's stop the Map Collect — we're not interested in that anymore. The next one is the help system. Let's go over to the help system and take a look at the difference. The PDF hasn't really changed, but let's go over to the help. You're gonna see it looks a little bit different now.
Here's the new screen. Let's type something into search — I always like doing Map Collect. Let's see what it brings up. It has a nice little bar saying, hey, it's doing its search, and it's doing its search right now. It brings up all the things that it found for Map Collect. Let's just click on this one; this happens to be the what's new for it. This is kind of what I just showed, which is, hey, you know, we switched it over to RSSL. You can see that in our what's new. The other interesting one we've added: if you click this little globe, it will do translation. Okay? Now what you could do is say, oh, I'm interested in a different language — you know, Japanese. Never tried it.
Let's actually go see. Hang on. Yep, let's go to French. I click the globe, and it'll switch the display to a different language. You can see it's now been translated, using Google, to a different language. That way, if you're more comfortable reading a different language, that's available now inside our help. Let me clear that. Those are the important parts of 7.9. Let's talk about 7.10. This is more of a heads-up so that nobody's surprised. The first one we wanna talk about is: what are we gonna do with Postgres? Postgres right now, as people know, we're on 13. We've got quite a bit of time, but in the next version I'm going to be switching to 14.
We've got a lot of time; once we switch to 14, we've got, like, three and a half to four years on it. Why are we going to 14 and not 15? You may say, "Hey, I do see a 15 version. Why don't you do that?" I get a lot of emails on it, so I want to explain it today. There are five main reasons. The first one: I am part of the Postgres developer group, and right now 15 is just on 15.1, which is not quite where I would want it. I'm always protective of DACS, making sure we use a very stable version — 14 is very stable, while 15 is still kind of being worked on. The next one has to do with cloud providers.
I can't go to a Postgres version that a cloud provider doesn't support yet, right? What you'll notice in Amazon is that 14 is the latest one they'll support. Same for Azure: 14 is the latest. Notice Flex Server — okay, we're gonna cover why Flex Server. Next is GCP. Again, their highest is 14 as well, not 15. On Azure, I always tell people to stay away from what they call Postgres single server. The reason is, if you notice, they stopped at 11, and if we go back to Postgres, here's 11 — that's it, you've got until November of this year, and then you're gonna be out of luck. That's why I say, people, don't even go down the 11 route. Stick to Flex Server.
The next one that's going to be important is Oracle. I'm going to go through the steps one by one, so you'll understand what we're going to be doing. Back in December of last year, here's the matrix. In DACS we were on client version 12.2, which meant we could support Oracle Database 21, 19, 18, 12.2, and 12.1. However, in December, all these pink ones used to be green, and now they're pink. Pink means no more Premier Support, no more Extended Support, no more maintenance support. That's it, no more fixes. That's a big heads-up, because it's not necessarily the bugs I care about, it's the security issues, right?
They said no more of that; that's gone. What does that mean for DACS? It means I'm only going to support what Oracle says I can support, so I'm going to move to the 21c client version, which supports the two databases they still support: 21c and 19c. How do we get to this client version? Let's go over to Oracle Instant Client. This is the API we use to talk to the Oracle database. You'll notice it has a requirement of glibc 2.14. If we go see which OSs include glibc 2.14: nope, not CentOS 6, so that's off the table.
That means the minimum glibc I have to compile against is the one that ships with Red Hat 7 and CentOS 7, and eventually, as we'll talk about, Ubuntu, which has a similar version. What does that mean for people installing DACS? It means the minimum version is now going to be Red Hat 7. You used to be able to get away with installing DACS on a 6, so I used to support three OSs: 6, 7, and 8. Now it's going to be 7 and 8, and later this year, as Jeff will talk about, we're going to be supporting Red Hat 9. Still three OSs, but 6 had to be dropped for this reason. Lastly, Jeff talked about Ubuntu.
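As a quick sanity check before installing, you can confirm a host's glibc meets the Instant Client's 2.14 minimum. A minimal sketch — note the `ldd` banner format varies slightly across distributions:

```shell
# Compare the system glibc against Oracle Instant Client's 2.14 minimum.
required="2.14"
# The version number is the last token on the first line of the ldd banner,
# e.g. "ldd (GNU libc) 2.31".
current=$(ldd --version | head -n1 | grep -oE '[0-9]+\.[0-9]+$')
# sort -V -C succeeds only if the two versions are already in ascending
# order, i.e. required <= current.
if printf '%s\n%s\n' "$required" "$current" | sort -V -C; then
    echo "glibc $current meets the $required minimum"
else
    echo "glibc $current is below $required -- this host cannot run the new client"
fi
```

On CentOS 6 (glibc 2.12) this reports a failure, matching the reasoning above; Red Hat/CentOS 7 and later pass.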
The main thing I want to show here is this Ubuntu install. The version we're mainly targeting is 20.04. You can see I have DACS running on it. What we're going to do is certify on that. Why do we want to do that? For a number of reasons. Jeff will talk about how it's better for containers and things like that, because you don't need a Red Hat subscription to run that container. The other reason, and this is for the Chinese market, is that it lets us pick up Ubuntu Kylin.
The reason is that Kylin is approved by the Chinese government, so we can then say, "Hey, we can run DACS in that environment." That's it. I've covered everything. I'll turn it back to Jeff.
Jeff, the old mute button is still on.
Thank you, Matt. Sorry about that. And thank you, Ted, for the great update and demos; it's good to know what's going on there. If we can move to the next slide. I'm not going to spend a lot of time, because we don't have a lot of time. All I'm going to say is, as a sneak peek at Q1 of 2023, you'll see the same two items that we have for RTDS: we're going to the cloud-friendly licensing system, which will be a floating license server, and we're going to Ubuntu 20 and Ubuntu Kylin version 20. Again, Ted just gave you the details on why: we have customers that are looking for something other than Red Hat as an OS.
We don't expect all customers to move in that direction, but we do want to be able to address those customers that are looking for something different from Red Hat. Also, as Ted mentioned, even though today there's no issue with building containers with Red Hat as the OS inside, there's the potential that it could become an issue. We just want to have a backup plan here; plan B would be Ubuntu for our containers. Again, you can run either one, but we're looking to have both.
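To illustrate that container point, a base image like the one below needs no Red Hat subscription inside the container. This is purely illustrative — the installer name and install steps are placeholders, not the actual DACS packaging:

```dockerfile
# Illustrative only: "dacs-installer.run" is a placeholder name, not the
# real DACS package. The point is the base image -- ubuntu:20.04 can be
# pulled and run without any Red Hat subscription.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates \
    && rm -rf /var/lib/apt/lists/*
COPY dacs-installer.run /tmp/
RUN /tmp/dacs-installer.run --quiet && rm /tmp/dacs-installer.run
```

The same image layout works with a `FROM registry.access.redhat.com/ubi8` base; having both keeps the Ubuntu path as the plan B described above.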
Again, I said I'd only do Q1, but I did want to jump down to Q2. As Ted mentioned, we're going to move to Red Hat 7 binaries, because we need that to do the migration to Oracle 19c and 21c, and that'll be available in Q2. That's it. Matt, you can take over. Thank you, everyone. Again, if there are questions, please send me an email.
Thanks, Jeff. We were going to do a poll question here, but we're actually going to skip that. If you want to skip through, we're going to go straight down to Vesna. The question was going to be around what the next priority is: Red Hat 9, Ubuntu, maybe CentOS, or maybe the Kylin version of Ubuntu. There are a bunch of questions, and we'll have a couple of minutes at the end once Vesna has run us through her roadmap highlights. I'll hand the microphone over to Vesna. Thank you very much.
Thank you, Matt. I was going to go over some 2022 highlights, but I think I'm going to skip straight to the 2023 H1 roadmap. Can you please go to the next slide? Just quickly, this is the 2022 highlights slide. We're going to share this deck anyway, so if anyone has any questions on the 2022 items, please don't hesitate to reach out to me. Next slide, please. I just want to quickly go over the high-level 2023 deliverables we're looking to do, at least for the first half of this year. For Q1 we're releasing the ELF framework upgrade and Red Hat 8 with admin functionality for ATS. ELF stands for Element Library Framework.
These are components and tooling that are native to browser technology, allowing the ATS user interface to work more efficiently and quickly, with better performance. The UI will look a little different, but it will function exactly the same way. We'll also have some documentation that goes over the functionality with the new look. We'll also have Red Hat 8 with the admin libraries in Q1 of this year, which was missing from last year's Red Hat 8 release for ATS; we'll finally have the completed version of that by the end of Q1. Skipping over to Q2, we're releasing a SAML 2 enhancement and also ATS in Azure. SAML 2 has been one of the most highly demanded enhancements from clients.
It will be similar to the SAML 2.0 DACS implementation: it will enhance authentication into the ATS user interface by using a SAML 2.0-based solution that can connect to a compatible corporate identity provider. We're looking at the end of Q2 for that one. Lastly, we also want to qualify ATS to run in Azure this year, similar to what we did for AWS at the beginning of last year. As always, we'll also have some smaller enhancements throughout the year in every release, which are documented in each of the packages that we provide. All the documentation can be found on My Refinitiv. That's pretty much it. Back to you, Matt. Hopefully we'll have some time for some questions.
Thanks very much, Vesna. That was a whirlwind; I appreciate you doing your best to pull that together. We've got, I think, four minutes left. We might be able to go a little over, but I appreciate everyone's got other meetings and other activities to dive into. I was just going to go through a few of the questions. The great thing about Slido is you can vote for the ones that are resonating. First of all, probably to clarify the RFA obsolescence: there were a couple of questions around what the scope of that was, whether it's Marketfeed or everything. Tirthankar, do you want to clarify that stance?
Yes, absolutely. I'll clarify the RFA question, and I'll also clarify the question John has asked about the UPA API. We've got two streams of RFA. One is the RFA 7.X series, a 32-bit API available in Java and C++. That will be end of life in Q1 2026, and the end-of-life notice will go out in March 2023. The other stream of RFA is 8.X, a 64-bit API available in Java, C++, and .NET, and that will be fully supported for the next five or six years. I think Umit has asked a question about RFA.
Umit, if you're on RFA 8.X, there is no action for you to take. John also asked a question about UPA. UPA was end-of-lifed last year. However, UPA was essentially taken and renamed to ETA a couple of years ago. If you are on UPA-C or UPA Java, all you need to do is migrate to ETA-C or ETA Java, and that is a very straightforward migration.
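For the Java case, the mechanical part of that migration is largely a package rename. As a rough sketch only — the old and new package paths below are assumptions, so confirm the exact mapping in the RTSDK migration documentation before running anything like this:

```shell
# Hypothetical rename sketch for a UPA Java codebase. The package names
# (com.thomsonreuters.upa -> com.refinitiv.eta) are assumptions -- check
# the RTSDK migration guide for the real old/new paths.
grep -rl 'com\.thomsonreuters\.upa' src/ | while read -r f; do
    sed -i 's/com\.thomsonreuters\.upa/com.refinitiv.eta/g' "$f"
done
```

After the rename, the code still needs to be rebuilt against the ETA libraries and retested; the point is only that no functional rewrite is involved.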
Thanks, Tirthankar. Vasavi, I'll come to you in a second; I just want to go to Parivat first. There are two questions on the failback. Is it done blind, or does the RTC check that the preferred host is available? And from Marco there was one about thrashing: when you've got network issues, you don't want it to flip-flop between the preferred host and the non-preferred one. Are there any guardrails in place for that? What's your response?
Sure. Yeah, for the first one: does the RTC check that the preferred host is available before attempting failback? The answer is yes. The RTC makes sure the preferred host is good: it confirms it can log in and receive the source directory before actually failing back to the preferred host. For the other question, we don't really have a direct way to limit the switching, but I would recommend using the specific date and time for the failback mechanism. For example, you can set the time to 1:00 A.M. on Saturday; then the failback can only happen at that specific date and time, so at most once a week. If the RTC is already connected to the preferred host at that time, nothing will happen. That's how you avoid switching too often.
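To make that concrete, a scheduled failback window like the one described might look something like this in the RTC configuration file. The parameter names here are illustrative placeholders, not the actual RTDS parameter names — take the real ones from the RTC 3.6.3 release notes and configuration guide:

```
! Illustrative only -- parameter names below are placeholders, not the
! real RTDS/RTC parameters. The idea: fail back to the preferred host
! only at 01:00 on Saturdays, so a flapping link cannot cause repeated
! switching during the trading week.
*rtc*preferredHostEnabled : True
*rtc*preferredHostFailbackTime : SAT 01:00
```

Pinning failback to a fixed weekly slot is the guardrail Parivat describes: worst case, one switch per week, and none at all if the RTC is already on the preferred host.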
Okay, thanks very much for that. There are a couple of other questions, but I did just want to go across to Vasavi. Based on what Luke was covering around the changes to CIAM, what is the impact at a very high level, knowing that we'll go into much more detail in other webinars? If you can just summarize.
Absolutely. There was a question from Eric Wong about the upcoming authentication changes: will there be any required work on the customer side for existing connections that work today? The answer is that existing connections using the machine credentials will continue to work as you migrate your applications to use the new service accounts. You saw a demo from Luke about creating and managing those service accounts; essentially, you'll be specifying those credentials. Depending on the application, the implications are different. If you are using a cascaded RTDS or an RTC connecting to RTO, the answer would be to configure it to use the service credentials and the new authentication mechanism.
If you are using the RTSDK, this is a new interface, so you will have to recompile and specify those credentials. If you are using WebSocket applications, we have sample applications that show you exactly what the code changes are. For any legacy applications connecting in via the LPC, the Legacy Protocol Converter, it's again a matter of altering the configuration to specify your new service account, and of course using an LPC version that supports it. That's a very quick summary, but we will have additional information, of course, as time progresses.
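For the WebSocket case, the service-account flow is an OAuth 2.0 client-credentials grant: the application exchanges its service-account ID and secret for an access token, then presents that token in the WebSocket login. A minimal sketch — the endpoint URL is an assumption, so take the real one (and the full login step) from the sample applications Vasavi mentioned:

```shell
# Sketch of the service-account token request. The endpoint URL is an
# assumption -- confirm it against the official sample applications.
TOKEN_URL="https://api.refinitiv.com/auth/oauth2/v2/token"   # assumed
CLIENT_ID="your-service-account-id"                          # placeholder
CLIENT_SECRET="your-client-secret"                           # placeholder

# Standard client-credentials POST; the JSON response carries the access
# token the WebSocket login request then presents.
curl -s -X POST "$TOKEN_URL" \
    -H 'Content-Type: application/x-www-form-urlencoded' \
    -d 'grant_type=client_credentials' \
    -d "client_id=$CLIENT_ID" \
    -d "client_secret=$CLIENT_SECRET"
```

Tokens from this grant expire, so real applications re-request them on a timer rather than once at startup.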
Thank you, Vasavi. Look, if you didn't get it all in 45 seconds there, you can see all of the other follow-up sessions we'll have specifically around that. We're going to stop there, because we're already a minute over. Thank you very much to all of our speakers for all the work they've put in presenting and preparing, and thanks to everyone who's spent the past 90 minutes with us. Hopefully it was useful. We'll continue to do these through the year, as well as the customer forums and other engagements. We'll share the material, and we will answer all the questions we haven't got to; we'll send those out in a document to accompany the materials.
Thank you very much for everyone's time, and we look forward to speaking to everyone again very soon. Thank you and goodbye.
Thank you, everyone.