Betatalks the podcast

28. Scaling, Azure API Management & open source - with Tom Kerkhove

We know Tom, a former Microsoft MVP and GitHub Star, mostly from his open source work maintaining KEDA, Promitor, Kubernetes Event Grid Bridge and Azure Deprecation. In this episode we talk about his current job at Microsoft, where he works on Azure API Management. We also discuss all his other activities, most of which have to do with scaling applications. And we talk about how a successful one-person 'spare time' project can grow into a critical dependency for an entire company.

About this episode, and Tom in particular: you can find @TomKerkhove on Twitter or check out what he is working on:

About Betatalks: check out our videos and join the conversation on our Betatalks Discord channel


Episode transcription

00:00 - Introduction
01:09 - Friend of the day
02:41 - Explaining KEDA and API management
14:27 - What does it mean to be a GitHub star
16:38 - Explaining Azure Deprecation
25:20 - Totally random question
29:18 - What does Promitor do
40:00 - Closing

Introduction - 00:00

Oscar 
Hey there, welcome to Betatalks the podcast in which we talk to friends from the development community.

Rick 
I'm Rick.

Oscar 
And I am Oscar.

Rick 
Oscar, how have you been?

Oscar 
I've been great. And I'm actually looking forward to something.

Rick 
I think you, I might know what you're referring to. But I'm actually looking forward to something too, because there's this one nice colleague that I'm finally getting to do a new project with again. So that's a good thing.

Oscar 
Yeah, I called upon your services for a project I'm doing at the moment. And it's, well, maybe the second time we're actually at a client together at the same time; normally we're sent to different places. But it is an interesting one. It's an enterprisey environment where we need to lift and shift quite a few applications: first lift and shift, and then going into a modernization track. There are some critical deadlines for certain reasons, so it's quite a challenge. So I'm looking forward to it.

Rick 
Well, that sounds like a job for us, right?

Oscar 
Yeah, definitely.

Friend of the day - 01:09

Rick 
So who's our friend of the day?

Oscar 
Our friend of the day is Tom Kerkhove.

Rick 
Tom is a software engineer at Microsoft working on Azure API Management and has been a CNCF ambassador since 2020. Previously, he worked for Codit as an Azure architect and containerization practice lead. He was recognized as a Microsoft Azure MVP and advisor from 2014 to 2021, and was one of the first GitHub Stars. You can find him around on GitHub, maintaining KEDA, Promitor, Kubernetes Event Grid Bridge, and Azure Deprecation. Tom turns coffee into scalable and secure cloud systems, and writes about his adventures on blog.tomkerkhove.be. Hi, Tom.

Oscar 
Welcome.

Tom 
Good morning. Thank you for having me.

Rick 
Well, it's good that you're here. Could you explain a little bit what your actual day to day job is? Because you're working on a lot of stuff, you do a lot of open source stuff, and you do a lot of work for API management. What does your typical day look like? Or is there no typical day for you?

Tom 
There's no typical day. But my main focus is working on the self hosted gateway for API Management, which is a containerized gateway. And then I also work mainly on KEDA next to that. The other projects I really do in my spare time, just because I love doing them. And, yeah, it's nice to do some things on the side, always. But time is always a problem. Who doesn't have that problem?

Explaining KEDA and API management - 02:41

Rick 
Yeah, I think we can all agree on that one. So for people listening who don't actually know, could you explain what API management and KEDA are all about?

Tom 
Yeah, so let's start with KEDA. KEDA stands for Kubernetes Event-driven Autoscaling, which is a CNCF incubation project. It basically strives to make application autoscaling on Kubernetes simple, where you can scale any workload Kubernetes-natively through deployments, or jobs, or your own custom resource definitions. And you just say: this is my queue, when I have that many messages, start scaling my workload. So it really aims at application autoscaling, not at the cluster. And this is important for, let's say, Azure Functions, where you put them on Kubernetes, but also just native Kubernetes applications. And then Azure API Management is an API management solution on Azure, which allows you to expose your APIs to your customers in a managed way. And it has these powerful policies, so you can bring logic to the edge by deploying it on your gateway instead of having to write it yourself, for example rate limiting, authentication, and all of these things, which make it really nice if you have a lot of APIs that you need to expose in a unified way.
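
(As an illustration of the queue-driven scaling Tom describes: a minimal sketch of a KEDA ScaledObject, applied with the official Kubernetes Python client. The deployment, namespace and queue names are hypothetical, and the trigger metadata follows the general shape of KEDA's Azure Service Bus scaler, so treat it as a sketch rather than a copy-paste manifest.)

# Minimal sketch of a KEDA ScaledObject, applied with the Kubernetes Python
# client (pip install kubernetes). Names are hypothetical for illustration.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in the cluster

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "orders-scaler", "namespace": "orders"},
    "spec": {
        "scaleTargetRef": {"name": "orders-worker"},  # the Deployment KEDA scales
        "minReplicaCount": 0,                         # scale to zero when the queue is empty
        "maxReplicaCount": 20,
        "triggers": [{
            "type": "azure-servicebus",
            "metadata": {
                "queueName": "orders",
                "messageCount": "50",                 # "when I have that many messages, start scaling"
            },
            "authenticationRef": {"name": "orders-servicebus-auth"},
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh", version="v1alpha1", namespace="orders",
    plural="scaledobjects", body=scaled_object,
)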

Rick 
Well, let's actually dive into API Management a bit more, because I heard you say, and I think I read that online as well, that API Management is a great way to provide your APIs to your own customers. But I think some people might be missing that sometimes you are your own customer, right? So even adding API Management in front of your own APIs that you use could still be largely beneficial.

Tom 
Yes. So before I joined Microsoft, I was a heavy API Management user myself, and I always introduced API Management if I could, because of two things. One is: within one company, there is never just one team, there is always more than one team. So you will indeed have to collaborate with other teams, and API Management is a great way to abstract teams away from each other while having a unified way of exposing these APIs. So each team basically becomes an internal customer of the other team. And the second is: because you don't know where you will be in the next five years, it's always good to have a facade in front of your physical API, so that if you move your API from, let's say, a web app to a container app, your customers will not even notice, because there's API Management in front. And you can easily migrate the traffic to another destination, which is also very powerful.

Rick 
So it's actually an extra layer of decoupling between front ends and your APIs.

Tom 
Yes.

Rick 
Yeah. And, of course, I know it's so much more because of all the policies that you can implement.

Tom 
Yeah. And there's also the interesting fact that you have the concept of users, groups and subscriptions, which allows you to have really granular access: which team or customer can access which API or product. So you can segregate that based on how much they pay, how many permissions they have, etc. And then you get all of these analytics on top of that, so that you understand: customer A is using my API this much, customer B is using it that much, so we can bill them correspondingly, for example.
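
(To make the subscription concept concrete: a small sketch of what calling an API behind API Management looks like from the consumer side. The gateway URL, API path and key are made up; Ocp-Apim-Subscription-Key is the default header API Management expects, and a rate-limit policy typically answers with HTTP 429 when a subscription goes over its quota.)

# Sketch of a consumer calling an API exposed through Azure API Management.
# Gateway URL and API path are hypothetical placeholders.
import requests

GATEWAY = "https://contoso.azure-api.net"      # hypothetical APIM gateway
SUBSCRIPTION_KEY = "<your-subscription-key>"   # tied to a user/group/product subscription

response = requests.get(
    f"{GATEWAY}/orders/api/orders/42",
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
    timeout=10,
)

if response.status_code == 429:
    # A rate-limit policy on the gateway kicked in for this subscription.
    print("Throttled, retry after:", response.headers.get("Retry-After"))
else:
    response.raise_for_status()
    print(response.json())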

Oscar 
If you look at all the features we're going through, like rate limiting and different customers... There are multiple tiers in API Management. We know, or I know at least, the classic one, which some people always thought was a bit expensive. But there are multiple tiers, and you're working on the one that you can host yourself.

Tom 
Yep. So we have the Developer SKU, which allows you to play around with API Management and have all the features, but without the SLA, and it's not allowed for production. For production workloads, we have Basic, Standard and Premium. Based on the SKU, you get more features, and also more throughput on the gateway. This is what we call the dedicated SKUs. Next to that we also have the Consumption SKU, which is a serverless offering and is a bit separate from all the rest, but it is a cheaper way, with limited features. Now, the self-hosted gateway that I'm working on is available in the Developer SKU and Premium. So if you want to use that, that's where you have to be.

Oscar 
And in the cloud it's fairly simple to me: you just click something and you pay. But if you host it yourself, is there a cloud component to it?

Tom 
So the way the self-hosted gateway works is that it's a connected solution, meaning the management and control plane are in Azure, but the self-hosted gateway you can deploy practically anywhere. The only caveat is that every now and then, or ideally always, it should be able to connect to Azure, so that it can pull the latest configuration from the cloud. Otherwise the gateway will still be running, but it will not be using the latest policies that you applied, or expose the latest APIs, etc. that you have. So in that fashion, it's a connected solution. And you're actually billed for the number of logical gateways that you create in the control plane. So imagine you want to deploy a self-hosted gateway in your factory: you create one logical gateway on the control plane, you basically say I want to expose these APIs on the gateway, and then you can deploy it in your on-premise environment. And whether you run one instance or 1,000, we don't actually care; it's the same price that you'll pay for this.
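
(For the Kubernetes case, the model Tom describes, a container you point at its logical gateway in the control plane, roughly comes down to a Deployment whose pods get a configuration endpoint and a gateway token. A sketch with the Kubernetes Python client; the namespace, secret name and replica count are made up, and the config.service.endpoint / config.service.auth settings reflect the documented self-hosted gateway configuration, so double-check them against the manifest the Azure portal generates for your own gateway.)

# Rough sketch of deploying the self-hosted gateway container on Kubernetes.
# Namespace, secret name and replica count are hypothetical.
from kubernetes import client, config

config.load_kube_config()

container = client.V1Container(
    name="apim-gateway",
    image="mcr.microsoft.com/azure-api-management/gateway:latest",
    ports=[client.V1ContainerPort(container_port=8080)],
    env=[
        # The management/control plane stays in Azure; the gateway pulls its configuration from it.
        client.V1EnvVar(name="config.service.endpoint", value="<configuration-endpoint-from-portal>"),
        client.V1EnvVar(
            name="config.service.auth",
            value_from=client.V1EnvVarSource(
                secret_key_ref=client.V1SecretKeySelector(name="apim-gateway-token", key="value"),
            ),
        ),
    ],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="apim-gateway", namespace="apim"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # one logical gateway in Azure; run as many instances as you like
        selector=client.V1LabelSelector(match_labels={"app": "apim-gateway"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "apim-gateway"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="apim", body=deployment)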

Rick 
And then, of course, the next question would be, since you say "if you run one or 1,000, we don't care": is there a way of autoscaling the number of instances that you run for your self-hosted gateways? So for instance, if load increases, I could use KEDA to scale it up, or...

Tom 
Actually, yes, there's two ways. Either you scale on CPU and memory through the native Kubernetes Horizontal Pod Autoscaler, or you can use KEDA to do CPU and memory based scaling, or you can use our HTTP add-on, which allows you to scale based on requests per second. So it depends. If you already use KEDA, well, my personal advice is: use KEDA. If you don't use KEDA, you can start with the HPA and see how that goes. If you need more robust scaling, you can then install KEDA as well.
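
(The "start with the HPA" option Tom mentions could look roughly like this: a CPU-based HorizontalPodAutoscaler for the gateway Deployment, created with the Kubernetes Python client. The namespace, deployment name and thresholds are assumptions for illustration.)

# Sketch of CPU-based autoscaling for a self-hosted gateway Deployment via the
# native HPA. Namespace and deployment name are hypothetical.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="apim-gateway", namespace="apim"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="apim-gateway",
        ),
        min_replicas=2,
        max_replicas=10,  # always cap it, so a misconfigured autoscaler can't burn money
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="apim", body=hpa,
)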

Rick 
So you're actually working on, if you look at it this way, the entire suite of autoscaling your self-hosted, yeah, API Management.

Tom 
Yeah. We want to make it simple, because that's a question we often get: how many instances do we need for this scenario, etc. And we can give a number, but it always comes down to: how big is your cluster? How big are the spikes? Etcetera, etcetera. So autoscaling is typically a good thing to do. But also make sure to put in that maximum instance count, so that you don't have a misconfigured autoscaler burning a lot of money; that is dangerous.

Rick 
Yeah, and also never forget the scaling-down rule, right?

Tom 
Yes.

Rick 
I think that's a mistake we all made once.

Tom 
Yeah,

Rick 
Somewhere in Azure. Yeah, definitely ballooned some services somewhere into something ridiculous. Okay, cool. What I'm really interested in is: API Management is really built for cloud, and I think it's been around for quite a number of years now. What was the path to being able to host that yourself? Because if something is built cloud-native, was it already able to run in containers, or was that a rebuild?

Tom 
This was done before I joined. How long API Management has been around, I don't know off the top of my head, but it was written in .NET Framework initially. The self-hosted gateway had to be able to run on Linux, and that's why .NET Core was required. So it's actually the start of a new gateway, which is now also becoming part of the cloud. So we're doing a full migration of everything.

Oscar 
Okay, so because of your new requirements, there was a bottom-up rebuild, and that is gonna be the new version.

Tom 
Yeah.

Oscar 
Okay. Interesting to hear some details about it.

Tom 
A couple of years ago, everything was cloud based. But if you have a look at Microsoft's strategy, hybrid, on-prem and the edge are so much more important now. And that's what we're following, because not only can you run the self-hosted gateway in your factory, but now you can also run the gateway on your local laptop to do your inner-loop work. And before you even push things to the cloud, you can verify that the policy you will deploy will work in production as well. And that's another big benefit that this enables.

Rick 
Yeah, it is actually. And talking about the inner loop and the fact that you can work locally: the cool thing is that API Management also automatically provisions a Git repository with all the information in there, which you can actually open up locally, make changes to, and then push back into Git. But currently, the Git solution, as far as I have been able to find, is a built-in Git repository that comes with your API Management instance. But you don't have an option yet, I'm not sure if 'yet' is the right word here, but there is no option yet to, for instance, provision your own Git repo, or have it be on GitHub or on Azure DevOps. Is that something that's being looked at, do you know?

Tom 
That's actually something I don't know if we're planning on doing. But we have been building a DevOps Toolkit, which allows you to automate things more as well, and which can also help you there. But I don't know about the future plans for the Git approach.

Rick 
Yeah. Because currently it's still kind of manual work, since if you do something in the portal, then you have to explicitly save to the repo to make sure that the changes are in Git.

Tom 
Yeah. And there's another solution, though: if you use ARM templates, or Bicep, then you can also use...

Oscar 
Please use bicep if you do anything there.

What does it mean to be a GitHub star - 14:27

Rick 
Yeah, I get that one. Switching gears a bit, you're a GitHub star.

Tom 
I used to be until I joined Microsoft.

Rick 
And you can't be a GitHub star any more then?

Tom 
No, because it's the same company, in theory, so they decided that GitHub Stars who are Microsoft employees cannot be renewed.

Rick 
Yeah, like you also were an MVP but no longer are because you're working for Microsoft.

Tom 
That is correct.

Rick 
But looking at your open source work, what does it actually mean, the fact that they have recognized you as a GitHub Star? I looked at your profile, and I saw a lot of green. Like, a lot of green.

Oscar 
You get a lot of green. If I'm working on some YAML for pipelines.

Tom 
That's for sure. That's for sure. Well, I mean, if you love doing something, is it your work or is it just something you love to do? I think that's the case for me. I love coding, I love building things, and I love collaborating. That's why I'm doing so much open source work. I've been working with the Azure DevOps team as an MVP as well, giving input, and I wanted to do the same thing for GitHub. That's what being a GitHub Star allows me to do: help them improve the product by giving them feedback. So I was very lucky to be one of the first GitHub Stars and share my views on how they can improve the platform. And I think it's somewhat of a recognition of the open source work that I do. But if I look at the other people who are GitHub Stars, then I really wonder what I am doing there, because the maintainer of curl has been there as well, and other big open source projects. So I'm really honored to have been part of that group.

Rick 
Yeah, I can imagine that one. So that's, that's a little bit of imposter syndrome kicking in there.

Tom 
Maybe, I don't know.

Explaining Azure Deprecation - 16:38

Rick 
But looking at what you are actually currently maintaining on GitHub, one thing that stands out is Azure Deprecation. And I think, if people are working on a day-to-day basis with Azure and they don't know Azure Deprecation, they should subscribe, because it gives you a lot of information about services that are ending. Could you explain a bit?

Tom 
Yeah, so the problem that I had was: I use a lot of Azure services, and new things pop up, but also things go away. And I found it very hard to get an overview of what has been deprecated, how much time do I have, and what the impact is. So what I started doing, as a heavy GitHub user, is I created a GitHub repo and started tracking issues for every deprecation, writing down what the official announcement was, what the impact is, and how long you have. That has actually grown into an Azure Deprecation dashboard, where every new deprecation that I see automatically creates an issue and automatically tweets about it. And recently, I also added a monthly newsletter which you can subscribe to, to get a summary of all deprecations. The problem, however, is that it's hard to find all deprecations: Azure updates includes some, but some teams hide them in blog posts or in documentation pages. So I also rely a bit on the community to help point out new deprecations to me. On the GitHub repo you have the issues, but you also have a project where you can see the deprecations per year, and you can filter based on service, based on impact, based on type of deprecation, because we have features and SDK versions being deprecated, but also whole cloud regions that have been deprecated already. So that's a bit the idea. And you can also actually sign up to get Event Grid notifications if a new deprecation is announced. That allows you to then automatically react to it: create a ticket on your backlog, create your custom notification, whatever. And my idea with that was also that if you have one of these events happening, what you would then do is trigger a Logic App, run an Azure Resource Graph query to see: okay, this deprecation is for App Service, show me all the App Service resources that I have. And then you can really take easy steps to get off of that deprecation, for example.
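
(The last step Tom describes, reacting to a deprecation event by listing the resources it touches, could look roughly like this with the Azure Resource Graph SDK for Python. The App Service resource type follows his example; the subscription ID is a placeholder and the rest is an illustrative assumption, not part of the Azure Deprecation project itself.)

# Sketch: a deprecation event comes in (say, for App Service) and you query
# Azure Resource Graph for your affected resources.
# Packages: azure-identity, azure-mgmt-resourcegraph.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

credential = DefaultAzureCredential()
graph_client = ResourceGraphClient(credential)

request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query="""
        resources
        | where type == 'microsoft.web/sites'
        | project name, resourceGroup, location
    """,
)

result = graph_client.resources(request)
print(f"{result.total_records} affected resources")
print(result.data)  # e.g. feed these into backlog tickets for the migration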

Oscar 
That is some amazing work, to automate something like that and put all the services next to each other. Is it by any chance already used by Microsoft itself, within the teams?

Tom 
I don't know if anybody's using it, but I've been working with the deprecation team to improve things, because my ultimate dream is that I can deprecate the repo and there's just a website that you can go to with the dashboards, to see which deprecations are open and how they impact you. For me, it's very similar to a live site where every deprecation creates a new page, and then per deprecation you see all the steps of the deprecation, meaning: now it's announced, now we deprecate this, now you cannot create new resources anymore, this is how you are impacted, etc. So that's my ultimate goal. But I don't know if that's coming.

Oscar 
Yeah, well, it's time that you need for that, right, and you're a busy man.

Rick 
Well, one thing that I think is actually pretty cool: before you joined Microsoft, I've seen you in quite a few talks with people from Microsoft, for instance during MVP Summit, and then somehow the question always came back: will it support Event Grid?

Tom 
And that was going to be, but I

Rick 
I just looked at your deprecation notices, though, your deprecation service. And it actually supports Event Grid, right?

Tom 
Oh, of course.

Rick 
So you even implemented throwing events based on Azure deprecation?

Tom 
Yeah, but wouldn't it be nice if there was an Azure platform event for every new deprecation? That would be amazing.

Rick 
I hear that there's this new.

Oscar 
It should just be in there like that. But that's the thing, right? If you start building out something like that, you think: this should already have a place. That's how a project like that starts, I can imagine.

Tom 
Yeah. And it's also a bit of a show and tell that I'm doing, like: this is what we could do, please let us do it. But the main problem that I've seen with deprecations in general is that people always see deprecations as a bad thing. People want to not acknowledge them, or hide them, because it feels like they made a mistake. But if you do deprecations well, as a platform, I think that's really an advantage for the customers, because you help them through the process. Announcing them is one part. But if you add that value on top with the dashboard, and these events, etc., I think you will only gain trust from your customers, because you're helping them.

Oscar 
You can put it in there in a timely manner, and you can actually help the customer towards the next service that might provide them a better benefit. Now they are in the dark until it's maybe too late, and they need to do their own research to see what kind of service is there for them.

Rick 
Yeah, although most of the time you get a lot of time from Microsoft and from Azure to migrate away from whatever is being deprecated. For instance, yesterday I knew something was announced, but I couldn't find it, so I went looking. And then I saw that, for instance, the multi-step web test in Application Insights is being deprecated. It's still running until 2024.

Tom 
Yes, I think the default is three years. So that's actually the second problem: Microsoft announces them already, which is good, but then customers are typically not aware of them. You have to have people actually caring about deprecations and subscribing to the Azure updates before they do things. And Microsoft, of course, sends out emails, but it sends them out to the subscription owners and these kinds of things, and they don't really know much about it. The customers need to gain that mindset to worry about the infrastructure that they run on, make sure they are aware of the deprecations, and start planning ASAP to get off of them. Because what I've seen at customers is they always wait until the last second, and then it's high priority, things are on fire because it's going away, while it has been announced years ago. That is a very funny thing.

Rick 
Very recognizable.

Oscar  
Hey, I can remember a story at a customer, this just happened, I think, a couple of weeks ago. And I was like: oh, did you know that? And I think we had three months, like until October. Azure, what was it, Cache? We had a distributed cache which was heavily used in the beginning. That's going away, and everyone was like: what does that mean? Well, this won't scale anymore. If you want to run on one instance it's fine, but there was a pretty big shock when we saw it, because we needed to move to, what was it, Redis or something. So there was some rebuilding here and there.

Rick 
But how cool would it be if Azure Advisor automatically tells you: hey, you're using this service, it will be deprecated, even if it's only in 2024, look into migrating. That would be, I think, the next step for deprecations within Azure.

Totally random question - 25:20

Oscar 
Yeah I think so too, Rick do you know what time it is?

Rick 
Is it time for totally random question?

Oscar 
It is time for a totally random question. Tom, what is the one thing that movies never get right?

Tom 
Oh, boy. I don't know. You tell me.

Oscar 
I think IP addresses.

Rick 
I always think about The Net with Sandra Bullock, which is a very old movie, so that shows my age a bit. But they have IP addresses with numbers above 255 or so. And probably because they don't want to have a real public IP address displayed, because I think it was in Squid Game that there was this phone number somewhere on a card, and then people started calling it, and it was an actual phone number. So that was an issue. The person, I think, actually had to change numbers because they were getting thousands of calls per day.

Oscar 
I think the IP address is a deliberate thing that they...

Rick 
Yeah, it might be. But then if you have a show like, I think it was CSI tech or something, I'm not really sure what it was called, but that was a CSI that was actually completely built around: we are these tech people. And then still... Oh, I think I got one. Yeah, it's the enhancement of images.

Oscar 
Oh, yeah. And then...

Rick 
Where you have three pixels, and then they create that into this high definition image. I think that's something that's still

Oscar 
Well, with AI, we're doing stuff like that now. Right. But it's just guessing. Right?

Rick 
Yeah. I don't think it will hold up in court yet.

Tom 
But also, why are the technical IT guys or girls, which are still underrepresented, why are they always the typical nerds and not just regular human beings like you and me?

Oscar 
Well...

Tom 
Always those stereotypes.

Rick 
He was talking to me. Yeah, I think I agree on that one as well. I mean, if they need to picture somebody like that, there's always the nerd with the trousers up too high and these old glasses on. And... yeah, I think we are just meant to be depicted that way.

Oscar 
I think it's what people expect to see, right? But indeed, it's also sustaining that image of IT. While we have plenty of work, so we're open for a lot of people to come in. But...

Rick 
Yeah, and like you said, Tom, I agree: female workers in our field are still underrepresented. But also, and I think we talked about this with Jeff Fritz in our previous episode, it seems to be cultural. I'm not really sure how it is in Belgium, because I think we're pretty close, so probably it's the same there as it is over here. But we also have a subsidiary in Serbia, and there are a lot more women working in tech there, and it's accepted, it's normal. And still, somehow, for women in tech over here in the Netherlands or in Belgium, it feels like people aren't as accepting of that. Is that the same over there in Belgium?

Tom 
I think it's starting to grow, but there are still very, very few female colleagues that I actually work with, I mean, doing development, which is a bit sad. And I think the perception in movies and series, etc. is not really helping there. But we do welcome them. So I'm not sure how we can improve it more, to be honest.

Rick 
No, it's a tough nut to crack.

What does Promitor do - 29:18

Oscar 
Tom, a question, because I'm looking further through your profile and you're doing so many things. Can you elaborate a bit? Because I see you tweeting sometimes about Promitor. What does that do, exactly?

Tom 
And that's a good example of: hey, let me do this nice sample, and it ends up being an open source project that you maintain. Four to five years ago, I had a customer using Cloud Services, Azure Cloud Services that is, which was one of the first Azure services and, in my opinion, one of the nicest services as well to do background workers; it is still around today. But at that time, it started to lose investment in features. It didn't support ARM, etc. So the customer was a bit worried to use that going forward. So they decided: hey, let's use this new technology called Kubernetes. They started deploying their workloads on Kubernetes, and then they said: how do we autoscale this? And I'm like: you can't, you can only scale on CPU and memory, but you cannot scale on Service Bus message count. So at that point in time, the only approach was to use Google Stackdriver or Prometheus. So what Promitor does is, I created a container which basically scrapes Azure Monitor metrics every five minutes, for example, and makes them available to Prometheus, so that they could scale on Prometheus metrics. Fast forward to today: Promitor has evolved to supporting Prometheus, StatsD and Atlassian Statuspage, with a variety of scrapers for Azure Monitor. And you can also now dynamically discover the resources in your Azure subscription through Azure Resource Graph. And it's being used by big companies like TomTom, Adobe. And we also have our nice Dutch friends, which is, no, Albert Heijn, sorry, and Ahold Delhaize using it. So it's basically bringing Azure Monitor metrics anywhere.
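
(To give a feel for the mechanism, here is a toy sketch of the same idea, not Promitor itself: pull a metric from Azure Monitor on a schedule and expose it in Prometheus format, using the azure-monitor-query and prometheus_client packages. The Service Bus resource ID and metric name are hypothetical.)

# Toy sketch of the idea behind Promitor (not Promitor itself): periodically pull
# a metric from Azure Monitor and expose it for Prometheus to scrape.
# Packages: azure-identity, azure-monitor-query, prometheus_client.
import time
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient
from prometheus_client import Gauge, start_http_server

RESOURCE_ID = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.ServiceBus/namespaces/<ns>"

metrics_client = MetricsQueryClient(DefaultAzureCredential())
gauge = Gauge("azure_servicebus_active_messages", "Active messages scraped from Azure Monitor")

start_http_server(8000)  # Prometheus scrapes http://localhost:8000/metrics

while True:
    response = metrics_client.query_resource(
        RESOURCE_ID,
        metric_names=["ActiveMessages"],
        timespan=timedelta(minutes=5),
        aggregations=[MetricAggregationType.AVERAGE],
    )
    # Take the most recent non-empty data point of the requested metric.
    points = response.metrics[0].timeseries[0].data
    latest = next((p.average for p in reversed(points) if p.average is not None), 0)
    gauge.set(latest)
    time.sleep(300)  # "every five minutes, for example"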

Oscar 
But this was born out of a project you were doing. Are you now a hostage of this project, because these big companies are now using it?

Tom 
So I actually started this as a POC: could I make this happen for them? And I then said: hey, you can use Promitor, but you need to install Prometheus, you need to maintain all of that, just to simply autoscale your workload. And then they said: well, that's too much, we'll just scale it manually for now. So they didn't actually end up using it. But I started doing more development on it, and yeah, it's become a bit big, certainly for the time that I have, but I still maintain it. And I work closely with the Azure Monitor team to improve it.

Rick 
Yeah, because I hear you saying that you scrape Azure Monitor. Azure Monitor actually stores its data in different types of storage, so it has the metrics and the logs. Are there currently APIs available that you can talk to, to get to those storages?

Tom 
Yes. And I'm a big 'fan' of hitting the rate limits on them, because the scale of these customers is really ridiculous. If I hear how many resources and how many subscriptions they have, yeah, it's pushing the limits. Which is hard to work around, because people want to scrape more metrics, but yeah, I'm just blocked by the rate limiting.

Rick 
Yeah, I can totally imagine. And there are quite a few native integrations from Azure Monitor towards other services. Are you, in the end, working to get this to be one of those integrations as well?

Tom 
Yes and no, but in certain scenarios it's not something you could integrate. Because imagine it runs on a Kubernetes cluster on-prem, where they use Prometheus. The only way to integrate this would maybe be through Azure Monitor for containers, but it's not a guarantee that they rely on it. And that's the tricky thing: I cannot assume they use Azure Arc, for example. So that's why it's still standalone at the moment.

Rick 
Yeah, but I think it's a nice project for people using Prometheus and wanting to get their data in there. I see that they can even get listed in the readme on the GitHub page, to make sure that people are aware of everybody who might be using this in production. And you actually have quite a nice list of companies there.

Tom 
Yeah. If I would say I have one big struggle in open source, it's exactly what you just mentioned: understanding who is using your open source project and listing them. I have that struggle with Promitor, I have that struggle with KEDA, because, yeah, you don't have any telemetry in there. Well, at least, you shouldn't have too much telemetry in there. But it's really hard. I mean, if you have a look at the image pulls, there are a lot of them, but I have no idea who is doing it.

Rick 
Yeah, that's a downside, I think, for multiple things. I'm not even close to doing open source like you are, but I have this small package that I created and put out there as a NuGet package, just so that people can use it if they want to, and I use it myself as well. And then in one month I see 2,000 downloads, and I'm not really sure where that's coming from. Like, is that Azure Pipelines downloading it for every build? Or, I mean, who's downloading the package?

Tom 
Yeah, that's a struggle. Now, there are some options there, and I did a presentation at GitHub Universe on this. GitHub itself can help there: certainly if it's a NuGet package, then GitHub insights helps you show who's using it in public repos. And then I also started using another tool, which is called Scarf. If you pull an image through containers.promitor.io, then it basically goes through a proxy and counts the downloads. And based on the IP, it also gives some information, like: it's coming from the US, etc. So then at least I know where it's coming from, but I don't know any more information than that. And that's a bit controversial as well, because it's a bit of tracking, but it also helps me understand: what's the growth? Where is it coming from? These kinds of things. Because in the end I'm not really interested in the names, but mainly in: okay, I want to do this, I want to have some feedback from customers. But if you don't know who they are, then you can't ask them. Yeah, the only thing you will see is bug reports.

Rick 
But that's actually one thing that you triggered. That's the thing with open source, right? I mean, how many people are now maintaining Promitor? Because there are now companies relying on this open source solution that you initiated, at least.

Tom 
I don't think we have time for this conversation; I can go on and on. I'm the sole maintainer; I very infrequently have other people contributing. But that's a big problem in our industry, which is the supply chain risk. It's the same for KEDA and for Promitor and other tools, where not enough people contribute back nor sponsor the projects that they depend on. And that's why I'm a big advocate for GitHub Sponsors, not for myself, but for all these other technologies that we rely on. Because the risk is that if you don't support them through contributions and/or sponsorship, in the end the maintainers will be burned out, and the software that you rely on will be gone, which will be more expensive, to migrate off of that and find an alternative, than just supporting these people. And the .NET community had a big, big fire when IdentityServer said: this is no longer sustainable for us as an open source project without sponsorship, so we'll make it a paid product if your company is big. And in my opinion, they did the right thing. Because if you don't pay for your identity solution and you just use it for free, then I think you have a big risk. Because if there's one thing that is important, then it's security and identity. And they made the right call there. Yeah, I totally agree.

Oscar 
Yeah, I do think this is also a bigger conversation, but...

Rick 
We can have that later on.

Oscar 
Yeah, we can definitely have Tom on again, maybe with people that are heavily into open source, because this is a topic that is pretty serious. And I see it also on the consumption side: companies I work with are underestimating the dependency and the risk by not thinking about this.

Tom 
I'm lucky enough to not have to do this as a day-to-day job. But I know other people who rely on open source and donations, and for them it's a big problem. But yeah, you also have companies like Microsoft, who have a FOSS Fund, and every month they donate X amount of money to a given project. I think at Microsoft it is 10k that they donate, which is something, but more companies have to do it.

Closing - 40:00

Rick 
I agree on that one. Tom, thank you for being here. Is there anything that you would like to get back to?

Tom 
No, thank you very much for having me. It was a pleasure. And talk to you later.

Rick 
Likewise. Thanks.

Oscar 
Thanks for being our guest.

Rick 
Thank you for listening to Betatalks the podcast, we publish a new episode every two weeks.

Oscar 
You can find us on all the major streaming platforms like Spotify and iTunes.

Rick 
See you next time.

Oscar 
Bye.




