How to Troubleshoot Like a Pro with Don Jones

Episode #1 Published Tuesday, June 23, 2020

This week, I'm talking with Don Jones about troubleshooting. Troubleshooting is a complex topic, and Don wrote a book about it: How to Find a Wolf in Siberia.

Don Jones has been in the IT industry since the mid-1990s, and in that time has written dozens of technology books. These days, he's shifted focus to books on business, self-improvement, and even fiction, along with books that are accessible to a wider, non-tech audience. You can find the whole collection at donjones.com/books, including top sellers like Be the Master, Let's Talk Business, and How to Find a Wolf in Siberia. 

Show resources:

Full transcript:

Barry Luijbregts  0:17 

Welcome to another Developer Weekly. This week, I'm talking with Don Jones about troubleshooting. Don Jones has been in the IT industry since the mid-90s, and in that time has written dozens of technology books. These days, he's shifted focus to books on business, self-improvement, and even fiction, along with books that are accessible to a wider, non-tech audience. You can find the whole collection at donjones.com/books, including top sellers like Be the Master, Let's Talk Business, and How to Find a Wolf in Siberia. Welcome to the show, Don. Thanks for having me. Good to be here. Yeah, thank you very much for taking the time, as I know you're a busy man. So today, I want to talk with you about troubleshooting. I've followed you online for a while, I like your content, and I've met you a couple of times at Pluralsight events. But I only recently discovered that you actually have some books. And I've read a couple of them, including How to Find a Wolf in Siberia, about troubleshooting. This book is short, and it's practical, and I liked it. Can you tell me what it is and why you wrote it?

Don Jones  1:32 

Yeah, so I guess the why first. I kept running into people who said, you know, "I wish I had better troubleshooting skills," or "someone needs to do a course on how to do better troubleshooting." And I thought, you know, there's got to be something like that. I mean, the tech industry feels like it's 98% troubleshooting half the time. And there is a website, and there's a fellow who's written some books and stuff, and I dug into it, because if somebody's already done something, I'd rather recommend that. But I kind of struggled to get through it, and I thought it could have been more of a story-based thing, just to help people get context. So I started really thinking about it. I talked to a lot of folks to get some different perspectives. And I remembered, way back in the day when I was speaking for The Experts Conference in Arizona, the CTO of NetPro, which ran the event at the time, did this little talk, and he said, you know, there's this phrase: how do you find a wolf in Siberia? Well, you build a wolf-proof fence. And he kind of expanded on that, and it made such a good high-level analogy that it stuck with me. Actually, when I wrote the book, I did not know where that originated; it originally came from "how do you find a wolf in Alaska," but Siberia is bigger, so it sounds more impressive. And it was such a good analogy that I wanted to make it the theme for the book. And I guess the point of the book, obviously I'm not going to make you an expert in everything, is that you don't have to be an expert in things. There's a process and a methodology, and if you follow that, and if you really stick with it, religiously, every single time you need to troubleshoot something, it will get you through. Now, that doesn't mean you're not going to have to learn some things, right? If something's broken that you don't know anything about, then part of the process is identifying the minimum amount you need to learn to actually solve the problem. So it tries to take troubleshooting from being this huge, vague "this thing is broken and I don't know what to do" to giving you a place in your mind to always start from.

Barry Luijbregts  3:45 

Right, yeah, because troubleshooting can be very daunting at times. I'm a developer, and I think I actually spend most of my time on troubleshooting. You know, I type some text into Visual Studio or something, then I press F5 and my application starts running, and it doesn't do what I want. So I start troubleshooting why that is, I fix it, and I do it all over again. So even in the development cycle, when it's not even in production, I'm troubleshooting maybe 60-70% of the time.

Don Jones  4:16 

Yeah. And that's really common for most trades, honestly. I think developers have it slightly easier, in that you can see your entire system in front of you. If you want to take the time, you can walk through line by line and figure things out. But any modern developer is also using these enormous libraries that you didn't write, and they're kind of black boxes. And if you're not interacting with one correctly, or if maybe it's broken, then those troubleshooting skills really start to come into play.

Barry Luijbregts  4:50 

Yeah, absolutely. So, as a developer, I think I'm a pretty good troubleshooter, and I think developers are good troubleshooters. But in the book, you say that there aren't any natural troubleshooters. As in, there aren't people that are born troubleshooters? Nope. Why do you think that is?

Don Jones  5:09 

Well, part of it is because the human brain evolved around a correlation-causation model. We see something happen, we look at what happened around it, and we try to assume that that's the cause. And so when we troubleshoot things, we tend to rely on that model. It means if we've got no correlations, then it's sometimes difficult for us to figure out cause. Another thing we do, kind of as a default, is: okay, if it was working yesterday, but it's not working today, then something must have changed, and so all I have to do is figure out what changed. And that's nominally true. But figuring out what changed can be hard. I mean, depending on what type of system you're working with: if you're troubleshooting something in your car, what changed? How do you actually go figure out what changed? And because that's such a default condition for people's brains to fall into, instead of troubleshooting the problem, we go off on this tangent of trying to inventory all the things that changed. And that's not always super helpful. Look, sometimes that's a valid troubleshooting step, to be sure, but it can't be your first step. And one of the reasons is that science itself, the idea of stating a hypothesis, then setting out to prove that hypothesis, and making sure that the hypothesis is disprovable in the first place, those are not natural things that come to us. And so we're not natural troubleshooters, because troubleshooting really does have to be kind of a scientific process. You'll see people all the time: maybe an outlet in the house isn't working, and so they'll run to the circuit breaker panel and just start flipping switches on and off. Well, that's not troubleshooting. You're just throwing spaghetti at the wall. Let's put some science into this: I posit that this circuit breaker is the problem, and here's my reasoning for that, and so I'm going to conduct a test to see if my reasoning is correct. And if it isn't, then that also needs to lead me in another direction. So to take the wolf analogy: if you told someone to go find a wolf in Siberia, they'd maybe fly a helicopter up and start doing a search grid of some kind, right? And that's the same random idea as throwing every circuit breaker to see if that was the problem. It's not actually troubleshooting. The answer to "how do you find a wolf in Siberia?" is: you build a wolf-proof fence down half of the country, and you then eliminate one side or the other. It doesn't matter which side you eliminate first. But by eliminating that side, you are not only removing a potential location for the wolf, you are directing yourself to a potential location for the wolf. Right? So you're creating a positive outcome one way or another. It's like when a lot of people who aren't super familiar with networking try to troubleshoot: "Oh, I can't reach the internet. It must be the router. I'll unplug the router and plug it back in." Okay, well, if it wasn't the router, you're right back where you began. That test did not put you on one side of the fence or the other; it just randomly eliminated one of many possible conditions, and now you don't necessarily have a next step. So that's what troubleshooting is really all about, for me: a process by which you're testing things, and the outcome leads you in one direction or another. It doesn't just leave you standing back where you started.
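To make the fence idea concrete in code: the same halving strategy is what developers do when they binary-search a history for the change that broke something. Here's a minimal Python sketch, with a made-up commit list and a hypothetical is_broken() check; in practice this is what a tool like git bisect automates.

```python
# A minimal sketch of the wolf-proof fence applied to code history.
# Assumes the oldest commit is known-good and the newest is known-broken;
# the commit list and is_broken() check are hypothetical stand-ins.

def find_first_bad(commits, is_broken):
    """Binary-search an ordered history for the first failing commit.

    Each test builds a fence down the middle: whichever way it comes out,
    half the territory is eliminated, and we know which half to search next.
    """
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_broken(commits[mid]):
            hi = mid            # culprit is at mid or earlier
        else:
            lo = mid + 1        # culprit is strictly after mid
    return commits[lo]

# Fake history where everything from commit "f" onward is broken:
history = list("abcdefghij")
print(find_first_bad(history, lambda c: c >= "f"))   # -> "f"
```

Ten commits take four tests instead of ten, and every test ends with you on one side of the fence or the other.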

Barry Luijbregts  8:44 

Right. So all of that sounds like a very comprehensive approach. Let me just try to see if I understand this, because I heard a couple of troubleshooting techniques here. For instance, seeing what's changed can be a step in the troubleshooting technique, right? Yeah, absolutely. As a developer, it's something I reach for very quickly, because something must have changed in the code or something, right? If something breaks, that's my initial response, which might not be a good troubleshooting response.

Don Jones  9:19 

Yeah, a lot of developers do this, and it actually isn't a great troubleshooting response. It can seem that way, because as a developer, in theory, you have control over your whole system. But let's take a really simple, classic example of someone who's constructing a SQL query, and they're maybe not doing the right thing: they're assembling strings together. And maybe they've gotten past development and moved into production, and now things have broken. Well, okay, what changed? As far as your code is concerned, maybe nothing changed. But maybe now your production data set has a last name like O'Doul that has a single quotation mark in it, and because you were building the queries the way you were, that broke it. So your problem was always there; you just didn't have a broad enough test set to detect it in your code when you were in development. And asking yourself "what changed?", I mean, the only thing that changed is that you moved from a dev environment into a production environment. That's a lot of things that potentially changed. So if you just start running down the list of what changed, you're not actually going to solve the problem.
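A hypothetical illustration of that failure mode in Python with SQLite (the table and the name are invented): nothing in the code changes, only the data does, and the concatenated query breaks the moment a value contains an apostrophe.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (last_name TEXT)")
conn.execute("INSERT INTO users VALUES ('Smith'), ('O''Doul')")

name = "O'Doul"   # the 'production' value that was never in the dev test set

# Assembling the statement from strings: fine for Smith, but the embedded
# quote in O'Doul turns the text into invalid SQL.
query = "SELECT * FROM users WHERE last_name = '" + name + "'"
try:
    conn.execute(query)
except sqlite3.OperationalError as e:
    print("broken query:", e)

# Parameterized version: the driver handles quoting, so the data can never
# change the shape of the statement.
rows = conn.execute("SELECT * FROM users WHERE last_name = ?", (name,)).fetchall()
print(rows)   # -> [("O'Doul",)]
```

The bug was there all along; the wider production data set is just the first test set broad enough to expose it.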

Barry Luijbregts  10:28 

Exactly. And that's the limited mindset that I find myself in a lot as a developer, because the world in which my application runs is very big, with a lot of variables and parameters that can change.

Don Jones  10:42 

Yeah, yeah. And so you really have to take the approach of: okay, where could the problem be, and what can I do to definitively eliminate at least one possible choice? Right, and that's where you get into debugging. In the case of code, you might set some breakpoints and set some watches so that you can actually see the data. You might construct your SQL query into a variable so that you can sit there and read the thing. And that's valid, and a lot of developers will do that very naturally, and I think they don't give themselves credit. Because simply by putting the query into a variable and setting a watch on it, you are implicitly declaring a theory: my theory is that something in the query is messed up, and so I am going to conduct a test that will eliminate or confirm that. So you set the watch, and you can either just look at the query visually, or, what a lot of people will do, copy it to the clipboard and go into a tool where you can run just that query with some hard-coded values in it. And therefore you are eliminating or confirming your theory. If you find out that, oh, I never factored in that particular data type, then you have confirmed your theory that that's where the problem was. But regardless, you're sending yourself in a direction. If the query ran standalone outside of your app, then you know that it's not the query; it's the next line of code that's doing it. So it gives you a direction to go next.
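Here's what that implicit theory looks like when you make it explicit, as a self-contained Python sketch (the query-building function is a hypothetical stand-in for the code under suspicion):

```python
import sqlite3

def build_search_query(last_name):
    """Hypothetical stand-in for the application code under suspicion."""
    return "SELECT * FROM users WHERE last_name = '" + last_name + "'"

# Theory: something in the assembled query is messed up.
# Test: capture the query in a variable and read it (the 'watch')...
query = build_search_query("O'Doul")
print(query)

# ...then run just that query, standalone, away from the rest of the app.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (last_name TEXT)")
try:
    conn.execute(query)
    print("Runs fine standalone: theory disproven, suspect the next layer.")
except sqlite3.Error as e:
    print("Malformed query: theory confirmed ->", e)
```

Either outcome moves you forward: confirmed means you fix the query builder, disproven means the fence now directs you to the code around it.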

Barry Luijbregts  12:21 

And that's kind of building the wolf-proof fence, right? Yes. You're splitting up the application, or the problem domain, into smaller and smaller pieces until you finally have the piece that it must be in. And there you can then define your thesis of what it could be, and prove or disprove it.

Don Jones  12:42 

Yeah, it's funny. It's tough to do now, because our applications are so big and so complex; even if you're breaking something down into microservices, there are just so many things. But some of my first programming experience was in the RPG language on an AS/400, and it was relatively common to create domain gates between chunks of code. I worked for a retailer, and we had a distribution center, and one of our most complicated sets of code was designed to look at the physical dimensions of the products we had to ship and figure out how many of them could go into a given carton to be handed off to UPS. So a lot of very complicated math and geometry going on. And that was broken into about eight or nine domain gates, so that you could take any one domain, and it's almost like writing a function where you know exactly what data is going in and data can only get out in a certain way, and you could validate each of those individually. For RPG, it was kind of a precursor to what anybody today would call good modular programming. But by doing that, by really breaking things down as you code into the smallest possible pieces, you reduce the number of places where you might have to declare a theory. You've got these strong walls and scopes between all these different data domains and functional domains, so when something does break, you can declare a theory and easily build your wolf-proof fence around it, because by the nature of modular programming, it's already a wolf-proof fence. It's one of the reasons experienced developers hate it when people use global scopes and things like that: now your fence has to be so much larger, when you could ratchet it down to a smaller area. So there are some design techniques, especially in coding, that can make troubleshooting easier. But it really does get down to that: break it down into the smallest bit possible and then verify that bit.
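As a small sketch of that idea in Python (the carton math here is invented, just to show the shape): a domain gate is a function whose inputs are validated on the way in and whose data can only leave through the return value, so a theory can be fenced to one gate at a time.

```python
# A hypothetical 'domain gate' for carton packing. Data enters only through
# validated parameters and leaves only through the return value, so when
# the shipping math goes wrong, the fence around this gate is already built.

def cartons_needed(item_volume: float, items: int, carton_volume: float) -> int:
    """How many cartons hold `items` units (toy math, not real packing)."""
    if item_volume <= 0 or carton_volume <= 0 or items < 0:
        raise ValueError("volumes must be positive and item count non-negative")
    if item_volume > carton_volume:
        raise ValueError("item does not fit in the carton at all")
    per_carton = int(carton_volume // item_volume)
    return -(-items // per_carton)          # ceiling division

# Because the gate is sealed, it can be verified entirely in isolation:
assert cartons_needed(item_volume=2.0, items=10, carton_volume=6.0) == 4
```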

Barry Luijbregts  14:43 

Yeah, I like this analogy to the wolf-proof fence a lot, because it applies so much to what we do as developers as well. You know, sometimes when I'm really stuck with a bug or an issue in code, and I know that it's in one class, for instance, or in one page, I know it's somewhere there, I just start to comment out stuff, just comment out code and see: was this it? No. Was this it? No. Well, then it must be in this piece right here. How much of this can I take out before it breaks? Exactly, exactly. Or just take it all out, and then create the most minimal possible application that can run that particular piece of code, and see if it still works within that different environment, a clean application without all the plumbing around it. That sometimes works for me, too.
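The same move in sketch form, with a hypothetical suspect function defined inline so it runs on its own (in real life you'd pull the one suspect piece out of the application instead):

```python
# A minimal reproduction harness. The 'application' is reduced to one
# function plus the smallest inputs that could show the failure; the
# precedence bug is planted here for illustration.

def apply_discount(price: float, rate: float) -> float:
    """Stand-in for the suspect code."""
    return price * 1 - rate          # bug: should be price * (1 - rate)

for price, rate in [(100.0, 0.0), (100.0, 0.25), (100.0, 1.0)]:
    print(f"price={price} rate={rate} -> {apply_discount(price, rate)}")
# -> 100.0, 99.75, 99.0: everything but the trivial case is wrong, and
#    the bug is now fenced inside this one small function.
```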

Don Jones  15:36 

Yeah. And I think for a lot of people, if you extend that technique to a design level: if it's possible to run a page with these 20 lines commented out, then maybe those 20 lines should be a standalone function somewhere else. Exactly. That way you can break them out and write yourself a little test harness to just call that one function. And that's also really where unit testing became such a strong thing: it helps you build a wall around a certain section of code and verify that that code is doing exactly what you thought it was. Those unit tests are almost kind of an automated pre-troubleshooting step.
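For instance, a self-contained sketch using Python's built-in unittest, reusing the toy carton function from earlier; each test is a pre-built fence around one behavior:

```python
import unittest

def cartons_needed(item_volume, items, carton_volume):
    """Toy function from the earlier sketch, repeated so this runs alone."""
    per_carton = int(carton_volume // item_volume)
    return -(-items // per_carton)   # ceiling division

class CartonTests(unittest.TestCase):
    """Each test is a wall around one behavior: if a later change breaks
    that behavior, the failing test names the side of the fence to search."""

    def test_exact_fit(self):
        self.assertEqual(cartons_needed(2.0, 9, 6.0), 3)

    def test_partial_last_carton(self):
        self.assertEqual(cartons_needed(2.0, 10, 6.0), 4)

    def test_zero_items_need_no_cartons(self):
        self.assertEqual(cartons_needed(2.0, 0, 6.0), 0)

if __name__ == "__main__":
    unittest.main()
```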

Barry Luijbregts  16:17 

Right, yeah, absolutely. You know, I really don't like creating unit tests. Does anybody? But it's just part of the deal; you just need to do it. Because if you don't... I have had so many times where I didn't write a unit test, and I changed something unrelated somewhere else in the application, and then something broke that had been working. Exactly. If I had written a unit test for that, I would have seen that it broke, and I would have changed my behavior and my code. Yep. Okay, so, we have a few techniques now. Finding out what changed can be a technique in some cases. There's obviously building the wolf-proof fence, and thereby eliminating certain parts of the problem domain, which is a very powerful one. And then we were talking about devising theories of what could be wrong. Now, in the book, you talk about the scientific method, which I think this is all about. Can you elaborate on what the scientific method actually is?

Don Jones  17:24 

Yes. So the scientific method is the idea that you state a theory for something that you think might be true, and then you create experiments to definitively prove or disprove it. And that's where a lot of people get messed up. It's not "if I run this test, it will tell me if it works." If you run the test, it needs to tell you it works, or it does not work; it has to confirm one or the other. It can't just be open-ended. I have a big lamp in my office with one bulb, and it was flickering uncontrollably. So, I don't know: is this a loose wire in the lamp itself? That was one theory. Is the bulb itself just going bad? That's another theory. I can unscrew the bulb and put a new one in there. And it depends on how you state your theory, right? When you state a theory, you have to make sure it is provable and disprovable. So let's say my theory was: the light bulb is bad. I pull the light bulb out. I don't really have another way to test it; I don't have another lamp in my office. So I put a different light bulb into the lamp, and that one works. Okay, technically I fixed my problem, but I still haven't confirmed whether the light bulb I removed is actually broken or not. I can still theorize that it is, I mean, after all, the new bulb works, but I would need to screw that old light bulb into a different fixture and see if it still flickers there. If it does, then I've proven the bulb is broken. So it's really easy to get into the idea of "oh, I just want to test this and get the problem fixed." But really, you want to get yourself into the mindset of: I'm going to do tests that prove or disprove what I think is broken, because that way you don't have to come back to it again. You know, it's the person who takes the flickering light bulb, unscrews it, puts it up on the shelf, gets a new light bulb, screws it in, sees that it's working, okay, but leaves that potentially busted bulb still on the shelf. You never finished all the tests.
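In code, that discipline might look like this hedged sketch (the parser and the hypothesis are invented): hold everything constant, change exactly one variable, and write the experiment so it must come out confirmed or refuted, never open-ended.

```python
# A falsifiable debugging experiment. Hypothesis: the parser mishandles
# Windows-style CRLF line endings. parse_lines() is a hypothetical
# stand-in for the code under suspicion.

def parse_lines(text):
    return text.split("\n")

expected = ["a", "b", "c"]

# Control: the known-good case must pass, or the experiment itself is invalid.
assert parse_lines("a\nb\nc") == expected

# Experiment: change exactly one variable (the line endings), nothing else.
if parse_lines("a\r\nb\r\nc") == expected:
    print("Hypothesis refuted: CRLF parses fine; look elsewhere.")
else:
    print("Hypothesis confirmed: CRLF handling is the problem.")
```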

Barry Luijbregts  19:37 

So, yeah, that is the scientific method. Yeah. And that's another technique that can help you troubleshoot.

Don Jones  19:43 

Yeah, and that's the big one, and that's why we say we want a wolf-proof fence. If I test a thing, then I have either proven or disproven a theory, and that will lead me on to other theories. But it needs to be self-contained. Right? I couldn't just unscrew the light bulb and screw another one in; that brings too many factors in. Now, you know, what if the new bulb still flickers? I actually haven't proven anything; I've kind of wasted my time. It could be another broken bulb; it could still be the lamp. I didn't disprove anything, and so I'm now going to spend more time coming up with a real test. Right?

Barry Luijbregts  20:27 

And those tests should also just test one thing, right? It's very important to control your variables.

Don Jones  20:35 

Yeah. And that's another piece of the scientific method: you have to clearly state what your theory is, and then clearly state what you are testing. You know, it's not enough to say, "I can't get to the internet, so I'm just going to flip the circuit breaker for the whole house back and forth." You're testing a whole bunch of things right there, and it's very time-consuming to test at that level. It would make a lot more sense to test five little things, because each of those can definitively eliminate a potential cause of the problem. If you turn the circuit breaker off and back on again, you've rebooted the whole house, and if it still doesn't work, now you're right back where you began.
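As a sketch of those "five little things" for the can't-reach-the-internet case (the gateway address is an example, and the ping flags assume a Unix-like system): each check is a fence around one layer, and the first failure points at where to look next.

```python
# Layer-by-layer connectivity checks instead of one house-sized reboot.
# Hosts are examples; ping flags (-c count, -W timeout) assume Linux.
import subprocess

def reachable(host):
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

checks = [
    ("loopback (own TCP/IP stack)",     "127.0.0.1"),
    ("default gateway (local network)", "192.168.1.1"),  # example address
    ("public IP (internet routing)",    "8.8.8.8"),
    ("public name (DNS resolution)",    "dns.google"),
]

for layer, host in checks:
    ok = reachable(host)
    print("OK  " if ok else "FAIL", layer)
    if not ok:
        print("First failing fence: the problem is at or below this layer.")
        break
```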

Barry Luijbregts  21:17 

Right. Plus, if it worked, you still wouldn't know why you fixed it, so you wouldn't know the actual root cause.

Don Jones  21:24 

Yeah, because you've rebooted your computer, you've rebooted your cable modem, you've rebooted your Wi-Fi access point; you changed several different things all at once. So if it happens again, you still don't actually know what the problem was.

Barry Luijbregts  21:39 

Exactly. So that's a very important thing as well. I once worked on an application that had some mathematical equations. I didn't really understand what the equations did, but I knew what went into them and what the outcome should be. So I had this sheet, and I would just program these equations into the application; I would put the input in, and then the outputs would show, and everything would be fine. Except one of these equations just didn't work for me: whatever I put in, the outcome was never correct. So I started troubleshooting, as in, I started to tweak the equation, without really knowing what it did. And at some point it worked for me, with multiple types of inputs; I just got the correct outputs out of it. But I didn't really know what had changed. And the trouble with that was that three months later, when I was working on something else, it actually did break, because somebody inputted something I didn't foresee, and the equation was actually still not correct. But I thought it was. I just didn't know the root cause, and so I couldn't actually test whether I had fixed the issue.

Don Jones  22:56 

So that's, yeah, important. It is. And most of us, in all of life, deal with black boxes. We deal with a lot of things where we don't necessarily know what's going on inside: your microwave, your Wi-Fi router, your television. So the idea in troubleshooting is to get to the smallest black box possible, to eliminate as many of them as you can. I realize that a television is a very complicated device, but I can only troubleshoot down to the entire TV; I can't crack it open and test the individual circuits inside of it. So what I'm going to do is maybe test it with a couple of different devices to see if the device is the problem. I want to get it down to as few black boxes as possible. So maybe I'm going to disconnect every single HDMI cable and just see if the TV by itself works with nothing attached to it but power. Because if it does, well, that's my wolf-proof fence: I now know that the TV, as a black box, is probably fine. Maybe it's my Roku box, or my Blu-ray player, or something else. And now I can start building fences around each of those and testing them, even though I can't fix what's inside them. My solution might just be, oh, I've got to go get a new Blu-ray player. But I took it down to that smallest black-box level.

Barry Luijbregts  24:19 

Right, for your level of knowledge of the problem domain, yeah. So, in IT, what we often get from the support department is a bug or an issue from a user that might be a bit vague; it might be "something went wrong," basically, with almost no info. What happened to me a lot is that I'd get these bugs and issues and I couldn't really reproduce them: not on my machine, not on the test environment, and not even in production. These are the so-called no-repro bugs. What do you do in that case? Because it's something that a user actually experiences, but I can't really test it.

Don Jones  25:07 

You're kind of just on the first step there. What you've done is, you've built a wolf-proof fence, but you've got to be really clear about what it contains. It contains this particular dev or test system; it includes this particular set of data. So you write down what those assumptions were, what is inside that fence, and then you probably need to go to the user's machine and see how many of those assumptions are true there. Does it still work when I use the same set of test data? Does it work when I use the same process? And that's where you'll often find out that you weren't really building the wolf-proof fence you thought you were. The user's like, "Oh, no, I have Caps Lock on all the time." Ah, okay, I didn't take that into account, so my fence had an opening; I need to include that now. So it's really about understanding the environment the problem is in. A good analogy, to go back to the wolf: the title is "how do you find a wolf in Siberia?" If you're looking in Alaska, you never will. So you've got to troubleshoot the actual problem, not necessarily a recreation of the problem. And that often means going on site. If the air conditioner in my house broke, and I called the manufacturer's support line, and they said, "Well, our air conditioner here is the same model, and ours works just fine," nobody would say that was a valid troubleshooting test. Obviously, my air conditioner is different from the one they have back in their office, and they're going to have to come out here and test my air conditioner.

Barry Luijbregts  26:47 

Yeah, that's such a powerful thing. You know, as developers, we always try to reproduce the bug. Of course, because then, if we fix it, or if we think we have a fix, we can test whether it actually occurs again. But obviously, you should then also look at the actual problem as it happened: the logs on the server when it actually happened, or wherever it was, maybe on the user's machine.

Don Jones  27:13 

Yeah, and building your code to have diagnostics is important. I said I'm not able to crack open my television, so I'll use a better example: my car, because I actually can work on my car a little bit. I've got the manual from the manufacturer on how to maintain that thing, and I know what to measure. It says, okay: if you measure the electricity here, it should read this; the airflow should be this. Those are all things that, if I've got the right equipment, I can get in and measure, because they have instrumented the car to make troubleshooting easier. They've created instrumentation that makes it easier to build a wolf-proof fence around a certain system: if this voltage is at this level, then it is not the charging system, and you move on to the next possible thing. Some software developers are really good at that, and others are really not. To go back to the query example, because it's a good one: if you've taken the time to put a query into a variable so that you can set a watch on it in your IDE, then you should also probably have a debugging option that an end user could turn on to output that same thing to a log. Whatever you would do in your IDE to troubleshoot, whatever you would look at, is something that should be enableable, so that the end user who doesn't have an IDE and doesn't have your source code can produce the same diagnostic information you'd be accustomed to looking at. That way you can kind of debug over there, without having to worry about all the variables that are involved; all the variables just get built in when it's running in that situation.
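A minimal sketch of that kind of switch in Python (the environment variable and the query function are hypothetical): the same query a developer would put a watch on gets written to a log when the user flips the flag.

```python
import logging
import os

# One switch the end user can flip without an IDE or source code.
# MYAPP_DEBUG is a hypothetical variable name.
debug_on = os.environ.get("MYAPP_DEBUG") == "1"
logging.basicConfig(
    filename="app.log",
    level=logging.DEBUG if debug_on else logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def build_search_query(last_name):
    """Hypothetical query builder; logs exactly what a dev would watch."""
    query = "SELECT * FROM users WHERE last_name = ?"
    logging.debug("search query: %s params: %r", query, (last_name,))
    return query, (last_name,)

build_search_query("O'Doul")
# With MYAPP_DEBUG=1, app.log now carries the same diagnostic a developer
# would see in a watch window, produced on the user's machine.
```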

Barry Luijbregts  28:53 

Yeah, totally agree. Absolutely. Log everything that you can, and if you don't have very sophisticated tools, you can always use print statements: just write out some text, "this happened at this point in time with these values of variables." That's always useful, even for performance improvements and things like that. So, as we're coming to the end of the episode, let me just share one big failure of my own. I was once troubleshooting a system that had a bug. It was a website, a very busy website with tens of thousands of people actually on it in production, and the bug was only in production. It was a very big one that actually cost a lot of money, and we couldn't really track it down. In the end, we decided to debug production. This was before you had the Snapshot Debugger, so you actually had to attach a debugger to the production process on the server. We did that, and then I hit a breakpoint, and I was just sitting there looking at the variables at the breakpoint, and then I found the bug, and later on I fixed it. What I didn't realize was that while the breakpoint was hit, the website was actually stopped, because the process was stopped. So the website was actually down for a couple of minutes for a lot of people, which also cost a lot of money. That was quite a big mistake of mine. Nowadays, you can use a snapshot debugger, which is something in Azure, for instance, that also attaches to the process but just emulates debugging: it takes a snapshot of the current variables and then sends that over to you as kind of a log, something you can use on your local machine without stopping the process. It wasn't very smart of me, but it happens. You know, you live and you learn. Do you have any of those troubleshooting failure stories?

Don Jones  30:58 

Oh, gosh, yes. We have a cabin up in the mountains, and if you have a cabin up there, it's hard to get contractors, so you kind of learn to do everything yourself. When we come in and reopen it after the winter, we've got a few things that we need to check for, just, you know, burst pipes, things like that. So we checked under the sinks and felt really good about it. Okay, we'll turn the water back on, but we'll have someone out by the meter, so you can just scream real loud if there's a leak and we'll shut it back off. And everything was okay. This was a couple of years ago, and we were there for probably a week or two. I went outside to water some plants; we have a hose connection on the outside, and I opened it up, and everybody else comes screaming out of the cabin: shut it off, shut it off, shut it off!

So I crank it down real quick. Well, we had forgotten to test the complete system. The pipe was fine running up to the hose connection. But that hose connection is actually a 15-inch-long metal tube that sticks through the wall, and it had burst over the winter, so when you opened the valve, it started raining into the basement. We really thought we had proactively been troubleshooting the system by turning the water on and running around turning all the faucets on. But we didn't build our fence; we left an opening, because we didn't test everything. And so we have something that's a little bit like a unit test: a checklist. Now we have a list that includes that; we added a unit test for the hose bib on the outside. And now we know to check it, and to have someone standing by at the water just in case. It was a huge mess, and it cost quite a bit of money to fix. So we definitely learned the lesson well.

Barry Luijbregts  32:44 

All right. Well, this has been absolutely great; I've learned a ton about troubleshooting. If people are looking for more information about troubleshooting, obviously they can read your book. Are there any other resources that you can recommend, especially for developers, where they can learn about troubleshooting?

Don Jones  33:05 

Yeah, there's not much, unfortunately. There's, I believe, the website troubleshooting.com; that was the resource I first came across. Some people may find that his approach works really, really well for them. It's not that different from mine; he just kind of explains it differently, in a way that didn't land as well for me and some of the folks I'd spoken to. I think it really gets down to practices. What I would suggest, especially for a developer, is to read a lot about debugging. Even if you're an expert debugger, that's fine, read about it, because every time you read a technique, okay, you can put a breakpoint here and then go set a watch and look at a variable, think about: how could I do that proactively to make troubleshooting easier? What is it I could have done in advance to make this whole failure process simpler for me? And adopt those as some of your standard practices. It's obviously going to differ between languages, toolsets, and platforms, so whatever yours is, really figure those things out and think about: what is hard about this, and what evidence could I be producing proactively to make it better?

Barry Luijbregts  34:12 

Right. That is excellent advice. Thank you so much for your time. We'll see you next time. Bye bye.