In the last section we looked at doing numerical integration using equally spaced nodes. But it's not hard to see that this is not ideal in every case. For example, on this function we have to use a fairly small h in order to resolve all the rapid oscillations on the right-hand side, but we could get the same quality of result on the left with a larger h, and that would be more efficient overall.

Speaking generally, there are two very desirable features in any practical approximation method. One is controlling, or at least estimating, the error of the result. The second is the ability to adapt to features of the input. Usually the first is a prerequisite for the second.

Let's figure out how to estimate the error in an integral calculation. Call T_n the trapezoid formula using n subintervals. With three evaluations of T we can extrapolate to two values of Simpson's formula, and then we can extrapolate one more time to a sixth-order value. If we use the better of the Simpson values as our estimate of the integral, then the final sixth-order value should be a good estimate of how accurate that Simpson value is, so we'll call the difference between these two capital E. We might want to use the sixth-order value as our result instead, since it's supposed to be more accurate, and that is allowed; but if we do, then we have to accept that our error estimate is probably too pessimistic.

Our adaptation strategy is a common one in computer science, known as divide and conquer. The idea is that you tear the problem into smaller pieces that should be easier to solve, you solve the subproblems, and then you glue them back together. In our case the tearing step is to break the integration interval in half. Each piece should require fewer nodes to resolve than the original, so they're easier versions of the problem, and then we can just add the results together.

So we have the following outline. We want to integrate f over the interval [a, b].
We try S on four nodes and estimate the error, and if that's not accurate enough, then we bisect the interval, integrate over the two pieces separately, and add the results together. Note that the integration subproblems are of the same type as the original, so we can use a recursive implementation.

The last thing we have to specify is what a good enough result is. It's common to use a criterion on the error estimate E that has both an absolute tolerance, for when the result is close to 0, and a relative tolerance, which takes over when the result is very large.

Finally, we can once again do some recycling of function evaluations in this recursive process. Initially we're using two function evaluations to get T_1, another one to get T_2, and two more to get T_4, and then all the extrapolated values we need. If we decide that we need better accuracy and we bisect the interval, we can carry down three function values with us to each of the subintervals, so that each new subinterval only needs two new function evaluations to do its thing.

Here's the implementation from the book. The outer function doesn't do much besides call the inner function; that's the actual recursive algorithm. This function expects to be given the three function values that are being recycled from the level above it, and then does the two new f evaluations. From these it calculates the three trapezoid values and the extrapolations. If the error is acceptable, the value is returned and this branch of the recursion is ended. Otherwise, it calls itself on the subproblems and adds the results.

Here I have a function that varies very slowly on the first half of the interval and much faster on the second half. I use MATLAB's built-in integration to find an accurate value for the integral of that function. Now I'll call the adaptive integration method from the book, and I'll ask for a tolerance of about 10^-4.
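Before looking at the results: the book's implementation is in MATLAB, but the recursive scheme just described can be sketched in Python like this (the function and variable names here are my own, not the book's):

```python
def intadapt(f, a, b, tol):
    """Adaptively integrate f over [a, b]; returns (integral, node list).

    The outer function just seeds the recursion with the endpoint and
    midpoint values of f.
    """
    m = 0.5 * (a + b)
    return _adapt(f, a, b, m, f(a), f(b), f(m), tol)

def _adapt(f, a, b, m, fa, fb, fm, tol):
    # Only two new evaluations per call: the values at a, m, b are
    # recycled from the level above.
    xl, xr = 0.5 * (a + m), 0.5 * (m + b)
    fl, fr = f(xl), f(xr)
    h = b - a

    # Trapezoid values on 1, 2, and 4 subintervals.
    T1 = 0.5 * h * (fa + fb)
    T2 = 0.5 * T1 + 0.5 * h * fm
    T4 = 0.5 * T2 + 0.25 * h * (fl + fr)

    # Extrapolate to two Simpson values, then once more to a
    # sixth-order value R.
    S2 = (4 * T2 - T1) / 3
    S4 = (4 * T4 - T2) / 3
    R = (16 * S4 - S2) / 15

    E = abs(R - S4)  # error estimate for the better Simpson value S4
    # Mixed criterion: absolute tolerance near zero, relative otherwise.
    if E < tol * max(1.0, abs(S4)):
        return S4, [a, xl, m, xr, b]
    # Bisect and recurse; each half carries down three of the five values.
    IL, nl = _adapt(f, a, m, xl, fa, fm, fl, tol)
    IR, nr = _adapt(f, m, b, xr, fm, fb, fr, tol)
    return IL + IR, nl + nr[1:]  # drop the duplicated node at m
```

Returning the node list alongside the integral mirrors the book's function, which makes it easy to visualize where the method worked hardest.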
Since the integral is pretty close to one in magnitude, it doesn't matter whether you consider that tolerance absolute or relative. The result comes back quickly. The actual error, though, is a little bit bigger than the error tolerance we requested. So this tolerance is not a guarantee; it's just an estimate.

The second output from the adaptive integration is a vector of the nodes that were used to compute the integral. Here you can see, as you would expect, a nice large spacing on the left half, and then it gets finer and finer and finer as you go to the right.

Next I'll do an experiment. I'll choose decreasing tolerances from 10^-4 down to 10^-14, and record for each tolerance what the actual error was and what the number of nodes was. As you can see, as we decrease the tolerance, the error never quite matches up and becomes as small as the tolerance. That doesn't happen in every problem; it's problem dependent. But the two do decrease pretty much in lockstep. Meanwhile, the number of nodes has to increase to get more and more accurate results. If we plot the errors on a log-log scale, they fall almost on a straight line, so this is nearly power-order convergence, with one strange hiccup here.

Now, for each value of the tolerance, the adaptive method picks as few nodes as it can get away with to give that error. If we took that same number of nodes and just distributed them equally, as we were doing before, how would that compare? Here I'm not worried about efficiency; I just write a T and an S to do the trapezoid rule and the Simpson formula. Then for each of my experimental tolerances, I evaluate the Simpson formula at the same number of nodes as the adaptive formula used and plot that for comparison. The dashed line is perfect fourth-order convergence, and naturally that's what this formula does. The adaptive formula, at least for a while, tracks that same slope, although it seems to be a little bit steeper toward the low end of the tolerances here.
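For reference, the equally spaced comparison is just a composite Simpson rule on a fixed grid. A minimal Python sketch (the lecture's version is in MATLAB; the function name here is my own):

```python
def simpson_equal(f, a, b, n):
    """Composite Simpson's rule on n equal subintervals (n must be even)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    # Endpoints get weight 1; interior nodes alternate weights 4, 2.
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h * total / 3
```

On the same total number of nodes, this fixed grid must use the spacing demanded by the worst-behaved part of the integrand over the entire interval, which is exactly the inefficiency the adaptive method avoids.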
But for every particular number of nodes, you can see that the adaptive method is about two orders of magnitude more accurate than the equally spaced method. So even though there is some overhead for doing the recursions and so on, depending on the integrand, adaptive integration can give you a very large savings in computational time.
FNC 5.7: Adaptive integration
From Tobin Driscoll, October 25, 2019