It's a great pleasure to have Dylan start our fall seminar series. We have a few talks scheduled, but there are many open slots, so if you have an idea of someone to invite, there are plenty of opportunities. Some of you know Dylan because he did his PhD with Sania, finishing I think around 2016, at which point he went to a postdoc in Germany and moved field slightly within massive stars — he had previously been doing Be stars — to star formation. Then he went to a postdoc in Belgium, although I gather much of that fell in the COVID era, working on red giant superwinds — again massive stars, as I understand it, but now cooler ones. Then he switched and went to a National Solar Observatory postdoc, where he has been working on magnetic fields in the Sun. This fall he's going to be starting a permanent staff scientist position just down the road, so we hope to see a lot more of Dylan in the years to come. Thank you. Yeah — it's great to be back and see everybody, and to be back in this room again. As John said, I've been at the National Solar Observatory the past couple of years working on MHD-consistent simulations of solar eruptions from the photosphere through the low corona. This is work I've been doing with a collaborator, also at the National Solar Observatory, and with some folks at the Naval Research Lab and Goddard. A little more colloquially, what I'm going to be talking about here is how to initiate an eruption and not have your boundaries screw it up for you. Just to give you an overview of where we're going: I'll give a little context and motivation, then talk about the simulation methodology we're using, specifically characteristics-based MHD. Then the rest of it, the actual results, are broken into two sections: one is how to get things into the box, using a data-driven method, and the other is how to get things out of the box.
More generally, open MHD boundaries. Context-wise, a lot of the open problems in solar physics these days — things like coronal heating, the initial conditions and evolution of eruptions, the origin of the solar wind components — really all depend upon a detailed knowledge of the MHD properties, the distribution of plasma and magnetic fields, in a 3D volume, and ideally in a time-dependent sense. Observationally we run into limits. We can get some of this observationally, because we can do spectropolarimetry and get good measurements of magnetic field components; by feature tracking you can get velocities, and you can do Doppler measurements. We can get a lot of these, but often only over limited subsets of the volume. What I'm plotting here are some radiative transfer calculations from an MHD simulation, through one vertical slice. The various colored lines show where the photosphere would be in different spectral lines, and the shaded regions, for the more optically thin features, show the region that would be traced by those features. You can see that we have a fairly difficult time getting a full 3D picture of the volume. What we do have, however, is good coverage of the photosphere. For instance, the iron doublet at 6301/6302 Å — which is actually the line I was showing in the prior slide from real observations — is a good tracer that's pretty well constrained to a pretty clean layer of the simulation. Although, of course, the caveat even to that: this drops by almost a megameter if you're going into the umbra of a sunspot, so you have to be careful even with these layers that are on average pretty flat. What we might want to do, if we're going to try to back out some three-dimensional distribution of plasma, is to take observations at that one layer — we can even do time-dependent observations at that layer — and then feed these in as a boundary condition to our simulations.
And then try, whether by static reconstructions or dynamic reconstructions, to back out the three-dimensional MHD properties based on information on this one plane where we actually do have some pretty good constraints. On the other end of things: once we do get something going and get an eruption in our simulations, we want to make sure that we're tracing its full evolution. What I'm plotting here is some work from a collaborator of mine from last year, where in this initial setup we're looking at, again, one vertical slice through a simulation. This is the into-the-plane magnetic field strength. You see this little feature down here at the bottom — that's a flux tube that's been introduced into the initial conditions of the simulation, and we've set it up so that it's unbalanced, so it will erupt if you let it go. It begins rising, it messes around a little bit, and then boom, it pops up. You see that there's all this volume that we're wasting at the top of the simulation. What we would really want to do is cut down on costs here: once we do get something into the box, we don't want to have to simulate all this free space off the top. Unfortunately, if you do that — cut to this layer and then run the same simulation again — what ends up happening is that the boundary conditions really mess with what the eruption looks like. You actually see the eruption pancake onto the top of the simulation here. These are the two bracketing problems we're dealing with: we want to get things into the box, but then we also want to make sure that things get out of the box, that our boundaries are not messing up either the way things come in or the way they leave the simulation. That then brings me to the methodology we actually want to use here. We really need something that represents this information propagation both into and out of the box.
Some of the constraints we're dealing with here are that the regions of interest are physically low-ish beta. They're not so low beta that we can just simulate the magnetic field and not worry too much about what the plasma is doing; nor are they so high beta that we could throw out the magnetic field, do a pure plasma simulation, and assume the field comes along for the ride. They are neither, and that's really important. Many of the methods on the market now are static reconstructions — ways of taking this one plane of data and extrapolating a magnetic field. Often you assume it's force-free and static, but the regions of interest, especially when we get into these eruptions, are neither. Now, that's both a drawback and a benefit, because if we do these static reconstructions we're throwing out a lot of information: the dynamics and the history of your plasma actually carry a lot of additional constraints that we can use to get this three-dimensional volume back. Beyond that, on the other end of this, like I said before, once things get out of the box we want to make sure that whatever we're doing with our ostensibly open boundaries, those boundaries really are open, and that they don't dominate the evolution of the simulation, nor dominate our computational resources. This brings me to the idea of characteristic space. I'll walk you through the actual mathematics, but what we're shooting for here is something that can correctly specify the boundary — and I really mean neither under- nor over-specify, which, as I'll come back to in a minute, is quite a tricky proposition. In practice, you want something that doesn't under- or over-specify the MHD equations at your boundary surfaces. Taking one layer on top of that:
because we have these time-dependent boundary conditions, we want that data-driven boundary condition to follow some prescribed time evolution, or to follow some physical properties that we want to preserve for the plasma — that's for the more generalized open boundaries on the top side. Then the most complex case, which is actually where we're living for our simulation boundaries, is where we want to impose some MHD properties, and we're doing this from observations. There are observational biases in this: we have error bars on all of our measurements, we have uncertainties on the spectral inversion, and we only get data at certain time steps, so we have to do some time interpolation. That means that for a simulation — which will in general, if not always, have much finer time stepping than the actual observations you have — you end up with many states where you can't really guarantee that the step from one input to the next is 100% MHD-consistent, either in the individual variables or in the ensemble. What we get with a characteristic-space method is that we respect the fact that information enters and leaves the volume, and that it does so at very specific speeds. These are the characteristic speeds of MHD — that's why they're called characteristics: information propagates along certain trajectories in space. These are the advection speed, and then that plus and minus the magnetosonic slow speed, the Alfvén speed, and the magnetosonic fast speed. We just want to make sure that, as we do all of this, we respect that each of these modes is an intercombination of MHD properties, so that we really guarantee we treat MHD properly. That's, like I said, a lot of motivation and background; the actual method we're using is in fact nothing particularly new. It goes back to the late 1970s — people have been using the method of characteristics to set up MHD boundary conditions for a long time.
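As a concrete illustration of the speeds just mentioned, here is a minimal sketch (not the speaker's actual code; units and normalization are assumptions) that computes the slow, Alfvén, and fast speeds along one direction from the usual ideal-MHD dispersion relation:

```python
import numpy as np

def mhd_characteristic_speeds(rho, p, B, gamma=5.0 / 3.0):
    """Characteristic wave speeds of ideal MHD along one direction.

    rho   : mass density
    p     : gas pressure
    B     : (B_normal, B_t1, B_t2) field components; first is boundary-normal
    gamma : adiabatic index

    Returns (c_slow, c_alfven, c_fast) for propagation along the normal.
    Units are chosen so that the factor of 4*pi is absorbed into B.
    """
    Bn, Bt1, Bt2 = B
    a2 = gamma * p / rho                       # sound speed squared
    ca2 = Bn ** 2 / rho                        # normal Alfven speed squared
    cA2_tot = (Bn ** 2 + Bt1 ** 2 + Bt2 ** 2) / rho  # total Alfven speed squared
    # fast/slow magnetosonic speeds from the quadratic dispersion relation
    disc = np.sqrt((a2 + cA2_tot) ** 2 - 4.0 * a2 * ca2)
    c_fast = np.sqrt(0.5 * (a2 + cA2_tot + disc))
    c_slow = np.sqrt(0.5 * max(a2 + cA2_tot - disc, 0.0))
    return c_slow, np.sqrt(ca2), c_fast
```

With a purely boundary-normal field the fast speed reduces to the larger of the sound and Alfvén speeds and the slow speed to the smaller, which is a handy sanity check.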
We're just using it a little bit differently here. What we do is take the MHD equations and recast them into this funny-looking vector format: one vector equation with matrix coefficients. For instance, this is the z direction here, the boundary-normal direction for our simulation — this is what that coefficient matrix might look like. If you constructed these for all three orthogonal directions, multiplied back through, and split this back out into eight separate equations, you'd just be getting back to MHD; this is just a different way of writing MHD. What it gives us is these matrices, and these matrices are diagonalizable — in general, not simultaneously. If you could diagonalize all three of these coefficient matrices at once, using the same eigenvalues and eigenvectors, you would have analytic solutions to MHD. We only get those in some special cases, but we can diagonalize each of these matrices separately very readily. What that means is that if we go to a split method, where we treat each direction separately from the others, we can get the actual modes that are the eigenvectors and eigenvalues of MHD. These are those characteristic modes. Like I said, they're really messy once you multiply everything through, but the important thing to notice is that there's this nice prefactor up front, which is a speed. So we can split up these eight modes, each some complicated intercombination of MHD properties and their spatial derivatives. We can categorize them: one mode enforcing that div B equals zero, and one that is an entropy advection mode — each of these travels at the advection speed of the plasma. Then we get two modes that are Alfvénic, two modes that are like the magnetosonic slow mode, and two that are like the magnetosonic fast mode. Taking these back together — I've tossed all of the sideways, transverse terms into these matrices because I don't really want to look at them right now —
the MHD equations would flow off several full projection screens if I wrote them all out. But for each of the individual modes, you see that they come back together to form MHD again. Like I was saying, you have this one that enforces div B equals zero, the entropy mode, the Alfvénic modes, slow, fast, and then this last term tacked on the end: that's all the transverse and inhomogeneous information. Gravity is in our inhomogeneous terms here; because it's not based on a spatial derivative of the underlying variables, we can lump it in at the end, and this method will treat things like that. In practice you can also treat things like thermal conduction or radiation, under the assumption that your underlying framework is close to an ideal MHD framework. The way we're doing this will break down if we get extremely fast propagating modes that transfer information from cell to cell — not throughout the entire simulation — in less than a time step. Radiation is okay, thermal conduction we have to be a little careful with, gravity is fine. So, as I was saying, we have these different modes and we can separate them out; they build up how MHD looks. And if we go to any individual cell on the boundary, we now have the benefit of being able to figure out, just from the velocity of the cell, how many modes are entering and leaving the cell, and where they are basically propagating information from. If you're at zero velocity — if this blue line at the bottom is the actual boundary surface of the simulation — then across this boundary we have one slow mode, one Alfvén mode, and one fast mode entering the simulation. Above the cell, we also have modes coming from higher in the simulation volume. These ones up at the top we can just calculate: we know all of the MHD properties and the spatial derivatives there, so we don't have to do anything with a boundary condition for those. But at the bottom, we know we have three modes to work with.
And we have three modes that are actually leaving the volume, which we don't really have to care about — they're carrying information away and they're not going to do anything to our simulation. In this case, because of the zero velocity, the entropy and div B modes have zero propagation speed, so we don't need to worry about what they're doing either: they're not propagating, nor is any of their information content crossing the boundary. Similarly, if we had advection out of the simulation at a speed between the Alfvén and slow speeds, we can split this up a little differently. You see the entropy and div B modes come back, but now they're actually leaving the volume, so we still don't have to worry about them for our boundary condition. Practically speaking, what's happened is that because we have advection out that is faster than the slow speed, you can't get information in at the slow speed, and that one mode flips out of the volume. We can do this cell by cell over the entire simulation boundary, and what we end up doing is taking the incoming modes that we are allowed to use to set the boundary condition and basically invoking the MHD equations backwards. We know the time derivative, because that's prescribed by our observations; we know the transverse and outgoing modes from the MHD simulation itself; and you back out what the best incoming modes are to get you as close as you can to that time derivative. Now, this is all based on this characteristic mathematical description of MHD; in a general simulation, the rest of the code may not be based on that. From a practical perspective, what we actually end up doing is writing a completely separate MHD code, which is fully based on the method of characteristics. It spans the entire extent of our base simulation — this large blue box — in the lateral directions, but it only goes just barely above and below the boundary in question. For this case, we use four grid cells in that boundary-normal direction.
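The cell-by-cell bookkeeping just described can be sketched in a few lines. This is a hypothetical helper (names are mine, not from the talk) that, given the boundary-normal velocity and the three wave speeds at a bottom boundary, sorts the eight modes into incoming, outgoing, and zero-speed:

```python
def classify_modes(vn, c_slow, c_alfven, c_fast, tol=1e-12):
    """Sort the eight MHD characteristic modes at a bottom boundary.

    vn > 0 means flow into the domain. A mode with propagation speed
    vn + c (or vn - c) enters the volume if that speed is positive,
    leaves if it is negative, and carries no information across the
    boundary if it is ~zero. Entropy and div-B modes ride at vn itself.
    """
    speeds = {
        "entropy": vn, "divB": vn,
        "slow+": vn + c_slow, "slow-": vn - c_slow,
        "alfven+": vn + c_alfven, "alfven-": vn - c_alfven,
        "fast+": vn + c_fast, "fast-": vn - c_fast,
    }
    incoming = [m for m, s in speeds.items() if s > tol]
    outgoing = [m for m, s in speeds.items() if s < -tol]
    static = [m for m, s in speeds.items() if abs(s) <= tol]
    return incoming, outgoing, static
```

At zero velocity this gives exactly the picture from the slide: three incoming modes (slow, Alfvén, fast), three outgoing, and the entropy and div B modes sitting at zero speed; with outflow between the slow and Alfvén speeds, the incoming slow mode flips out of the volume.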
That's actually some padding we keep for future things where we may want higher-order spatial derivatives; really, we only need one cell inside your simulation and one cell outside. This shows the bottom, but presumably it goes all the way around — not as six separate MHD codes, but one MHD code that spans one boundary at a time. We set it up in a direction-agnostic way, so it doesn't care about x, y, z; it cares about 1, 2, 3. The only wiring you need to do to get it on all sides, or onto any grid structure for your base MHD code, is a method to map from your base code's grid onto the boundary code's grid. In fact — we haven't tried this yet, but I'm hoping, I think, that this will work — we could even go to something like a PIC code rather than an MHD code as the base. As long as you can back out a grid of MHD properties from whatever your base code is, that base code presumably does not need to be MHD itself. So that's a lot of background, a lot of setup. Now I get to show you all the cool results that we've managed to back out in the last couple of years. [Some fiddling with the projector — the two screens aren't showing the same thing.] Okay, that's great, perfect, thank you. Well, for the folks online, you'll get the slides later — sorry about that. All right, so we've got getting things into the box here now; that's our first set of validations of the method. And conceptually — I skipped a slide here, lost it in my final version — anyway, we want to do two different things. One is making sure that we know what happens if we don't provide data often enough. We also want to make sure that we know what happens if we don't provide all the MHD variables — we basically insert errors, or assume we can't back out some properties. Because if we know everything all the time, this is very uninteresting:
you can get exactly the MHD simulation you started with, all the time, whether or not you do this well. The validation really needs to focus on times when we miss information. Okay — for the cases where we get things into the box: in a normal MHD simulation, we know the spatial derivatives, which is equivalent to knowing all the characteristic modes, and you use those to solve for the time derivatives. In the data-driving case, instead, you know or approximate the time evolution, and you know some of the spatial derivatives — the ones inside the volume — which is equivalent to knowing all the outward-propagating and transverse-propagating modes. But you don't know what's coming into the box. What you end up doing is, like I said before, solving MHD backwards: you solve for what is coming into the box in order to get as close to the known or approximated time evolution as possible. And then, now that you know all your spatial derivatives and incoming modes, you go back and solve for the actual time evolution imposed by that set of modes in every direction — because that is actually going to be MHD. What we back out for the incoming modes will not necessarily always get us to the known or approximated time evolution, but it's guaranteed to do MHD, and we get as close as we can to the observations. For our validation case, we run a base MHD simulation which is quite large, as you can see, in the vertical direction, and we embed a spheromak into it and pressurize the inside so that it expands. You don't really need to know anything about what a spheromak actually is; for our purposes, the only thing that matters is that it's a closed magnetic structure, so we can start out with a totally field-free portion of the volume and then insert magnetic field into it. If you take some slice through this — hand-waving, if you squint your eyes really hard — it looks like an emerging active region, which is really nice.
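The "solve MHD backwards" step has a natural linear-algebra reading: the time derivative splits into a part set by the incoming-mode amplitudes and a part the interior already determines, and you pick the amplitudes that best match the target derivative. A minimal least-squares sketch, under the assumption of a linearized update with hypothetical names:

```python
import numpy as np

def solve_incoming_amplitudes(R_in, dUdt_target, known_terms):
    """Back out incoming characteristic amplitudes from a desired time derivative.

    Model: dU/dt = R_in @ a_in + known_terms, where known_terms collects
    the outgoing, transverse, and inhomogeneous contributions that the
    interior of the simulation already determines, and the columns of
    R_in are the (right-eigenvector) responses to the incoming modes.
    We solve for a_in in the least-squares sense: with fewer incoming
    modes than variables, the target cannot in general be matched exactly.
    """
    residual = dUdt_target - known_terms
    a_in, *_ = np.linalg.lstsq(R_in, residual, rcond=None)
    # The time derivative actually applied -- MHD-consistent by construction,
    # but only as close to the target as the incoming modes allow.
    achieved = R_in @ a_in + known_terms
    return a_in, achieved
```

Note how `achieved` differs from `dUdt_target` whenever the target is not reachable with the available incoming modes, which is exactly the property exploited later when diagnosing where the driving went wrong.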
We get two polarities of flux, roughly arranged next to each other the way an active region might be. Then, for a validation case, what's really important is that if you take any slice through the spheromak at any given time, there's a broad range of Alfvén, slow, and fast speeds, and of their relation to the expansion speed of the spheromak. So we're really testing all of the different combinations of numbers and configurations of incoming modes. Here's the slide that I thought came earlier. So, what we want to validate: if you miss data in time, what does that do? If you miss some of your MHD variables, what does that do? The first one is comparing driven and ground-truth simulations when we drive with different cadences of data. On the left I'll be showing the ground truth itself. You see the field lines; the vertical plane that's somewhat transparent is plotting vertical velocity, and the horizontal plane, which is all black right now, is plotting the normal component of the magnetic field, Bz. The middle is if we only provide ground-truth data every 20 CFL MHD steps and do some interpolation in between; the right is if we drive every hundred steps. For this case, because we really want to hit the method pretty hard, we just do a linear interpolation in time: all of the MHD properties between ground-truth datasets are just linear interpolations, which we know are not going to be MHD at the intermediate times. What ends up happening — we'll watch this a couple of times — is that you can see at 20 time steps, the region above the plane here, that is the actual driven simulation, is in very good agreement with our ground truth. However, by the time we get up to 100, you start seeing that some of the field lines don't quite agree. Notably, on this vertical velocity, you start seeing all these kinks on the front expanding into the upper volume.
We can tell 20 is pretty good, a hundred's not so good. To look at that a little differently, we take a slice through the simulation: ground truth again on the left, then 20, 50, and 100. From top to bottom we've got density on the left and energy density on the right, then the three components of velocity down the left and the three components of magnetic field down the right. You can see that at 20 we're in very good agreement — although folks online won't be able to see that, but that's okay, I can point, I'll do it this way. By the time you get out to 50, you start seeing some subtle differences, notably in the vertical velocity component here; by the time you get to 100, it's really quite meaningfully different. I got a question: go back to the previous slide — you're saying driving every 20 CFL steps, or 100; is that assuming the amount of data you have is only enough to resolve every 20 CFL steps, or 100? Yeah. To give you an example: the way these run, we were getting a CFL step every couple of seconds of simulated time. One of our big targets is using HMI satellite data, and we get magnetograms of the full Sun every 12 minutes, so there are going to be quite a number of steps that we are missing. We want to understand when things go wrong, and why. What are you doing in between — interpolating? For this, we're doing linear interpolation: we just say, well, I know the magnetic field here and I know it here, and I do a linear interpolation in all three components. We could do a little better than that — there are methods that use the induction equation to get velocity components, and based on those you could advance your boundary forward and get a better time interpolation. But here we're just doing linear interpolation in all eight variables independently, completely ignoring MHD, to try to hit the method as hard as we can.
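The naive interpolation being stress-tested here is just a per-variable linear blend between two observed snapshots; a minimal sketch (array shapes and names are assumptions, not from the talk):

```python
import numpy as np

def lerp_driving_data(t, t0, t1, U0, U1):
    """Linearly interpolate all eight boundary MHD variables between two
    observed snapshots at times t0 and t1 -- the deliberately naive
    scheme used to stress-test the driving. Each variable is blended
    independently, so the intermediate states are not MHD-consistent.

    U0, U1 : arrays of shape (8, ny, nx), one slab per MHD variable.
    """
    w = (t - t0) / (t1 - t0)
    return (1.0 - w) * U0 + w * U1
```

Anything fancier (e.g. induction-equation-based velocity estimates) would replace this blend, but the point of the test is precisely how far this crude version can be pushed.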
Yeah — what we want to know is why it went wrong between 20 and 50, and then really at 100. Because it's probably not intrinsic that 20 CFL steps is okay and 50 is not; it depends on what's actually going on in your box. This was work that was already done in 2017 by one of the co-authors on this paper; it reinforces that even with a good MHD-preserving method like this, you end up with the same result. The answer ends up being that you need to resolve the typical time scale for things to move or grow. If you have some flux patch that you see in one observation, and then in the next one you see something of the same polarity separated by three times its radius, there's no way for the driving to intrinsically know: did this one disappear and that one appear, or did it just migrate from one location to the other? The way we're doing it, this one always disappears and that one always appears — but that's probably not the right answer. If we did some of those more clever interpolation methods, we could get around this. But even if you don't do clever interpolation, it turns out that if you only move a feature by its own radius, that's actually okay. That's our threshold: move it less than its own radius, all right; move it more than its own radius, not all right. Similarly, if you double the size of something in between two driving inputs, that doesn't work very well; less than that is okay. That's, hand-waving, the threshold we found. For this simulation, what we've done is just look at the density, magnetic field, and velocity, drop this dashed line on here to get an expansion velocity, and it turns out that it's about 35 CFL time steps — again, just hand-waving — for this feature to grow by its own size. And what we can do quantitatively, instead of qualitatively looking at these images, is take a mean standardized difference between our ground-truth and driven simulations.
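The rule of thumb above reduces to a one-line cadence bound; the sketch below (my naming, illustrative numbers only) just states it explicitly:

```python
def max_driving_cadence(radius, speed):
    """Hand-waving threshold from the talk: between two driving inputs a
    feature should move or grow by less than its own radius, so the
    driving cadence dt must satisfy dt < radius / speed. Returns that
    upper bound; `radius` and `speed` are in matching units (e.g. cells
    and cells per CFL step, giving a bound in CFL steps).
    """
    return radius / speed
```

For example, a feature 7 cells in radius expanding at 0.2 cells per CFL step should be re-driven at least every 35 CFL steps, consistent with the threshold read off the plot in the talk.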
What we're doing, for each individual MHD property, is taking cell-by-cell differences, and then weighting by the inverse correlation matrix to basically put a scale on all of this. What you get is an average difference between the simulations. To orient you: a difference of one on this scale would be one MHD variable out, on average, by one standard deviation. The red here, which is pretty low down, is driven every 20 time steps; the purplish is driven every 50. The top panel is at the boundary condition, the actual driving layer: how different are the simulations there? The bottom panel is 15 cells up into the simulation — far enough away from the boundary condition that information is propagating up from the driving, but we're not seeing the driving directly. What we can see is that in both panels, 20 is very low down and 50 is not. Importantly, if you look at this zoom-in here at the layer of the driving: once we crossed that threshold, the simulation never gets back down to zero error again. Because it's always following MHD, it can't get back to what you are asking it to do — the linear driving in between allows the simulations to diverge by enough that you can't get back to the ground truth anymore. That's actually a really important feature of this method: because it preserves MHD in between, it won't be able to get back to the driving, and so you can tell when it went wrong. That's unusual — normally, with other methods, you always get what you asked for, because you asked for it too hard. Now, the other half of the validation: what if we don't have strong constraints on some property? For instance, mass density and energy density on the plane are a good deal harder to back out than the velocity and magnetic field components.
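The error metric just described is essentially a per-cell Mahalanobis-style distance averaged over the layer; a minimal sketch with assumed array shapes (mine, not the talk's):

```python
import numpy as np

def mean_standardized_difference(U_truth, U_driven, inv_cov):
    """Average difference between driven and ground-truth simulations,
    scaled by the inverse correlation/covariance of the MHD variables,
    so that a value of 1 means one variable is off by roughly one
    standard deviation on average.

    U_truth, U_driven : arrays of shape (n_vars, n_cells)
    inv_cov           : (n_vars, n_vars) inverse covariance matrix
    """
    d = U_truth - U_driven                         # per-variable, per-cell differences
    m2 = np.einsum('ic,ij,jc->c', d, inv_cov, d)   # squared distance per cell
    return np.sqrt(m2).mean()
```

With an identity `inv_cov` and one variable off by exactly one (unit) standard deviation in every cell, the metric comes out to 1, matching the orientation given in the talk.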
Many methods will in general just say: well, I know a photospheric density and energy, and we'll say that's the whole photosphere, constant in time, and never let it vary — because that's a pretty good average photospheric state. What we can do instead, because we're backing out these incoming modes using the time derivatives as our constraints, is place different weights on the different constraints. If we don't trust density and energy, we can say: I give those one millionth of the weight of the velocity and magnetic field that I do trust. That's specifically what we've done here. The left is typical data driving, which just throws constant density and energy in there and holds them fixed. Our middle is characteristic-based data driving with that 10^-6 weight on density and energy. We're still feeding in the same driving data — density and energy are fixed — we just don't put a strong weight on them when we actually do the inversion for the incoming modes. If you sort of remember what the simulation looked like before: now we're seeing these sorts of kinks in both cases, so you might be wondering, is this really actually better than the typical methods? But if we go back to those error metrics I was showing before — our method is the long-dashed line, typical data driving is the short-dashed line — we are actually doing meaningfully better. We are not doing as well as if you had every MHD property and real constraints on all of them, but that's probably not surprising: we really are losing some information here.
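The down-weighting of untrusted variables fits naturally into the same least-squares inversion as a per-constraint weight; a minimal sketch (hypothetical names, illustrative linearized model):

```python
import numpy as np

def solve_incoming_weighted(R_in, dUdt_target, known_terms, weights):
    """Inversion for incoming characteristic amplitudes with per-variable
    weights on the time-derivative constraints.

    Variables we do not trust (e.g. density and energy) can be given
    e.g. 1e-6 the weight of the measured velocity and magnetic field,
    so the fit is dominated by the trusted constraints. Implemented as
    ordinary weighted least squares on the model
    dU/dt = R_in @ a_in + known_terms.
    """
    w = np.sqrt(np.asarray(weights, dtype=float))  # row-scaling for weighted LS
    residual = dUdt_target - known_terms
    a_in, *_ = np.linalg.lstsq(w[:, None] * R_in, w * residual, rcond=None)
    return a_in
```

With equal weights this reduces to the unweighted fit; with a 10^-6 weight on one constraint, the solution sits essentially on top of the trusted constraint alone.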
One of the things we're really pleased with: if you look at the final snapshot of all of the simulations, the footprint of this expansion — density on the vertical slice, and its footprint on the boundary — is in much, much better agreement with the ground-truth simulation than typical data driving, which has basically stunted the expansion of the spheromak. To wrap up, the last thing I want to talk about is getting things out of the box. For the data-driven boundaries, we know the time evolution and we know some outgoing modes; we don't know the incoming modes. If we go to open boundaries, we've lost a crucial piece of information: now we don't actually know a priori what we want those boundaries to be doing. We may know that we want them to trace some physical property, and we may be able to back out how to represent that in terms of the MHD equations, but we don't actually know a time series of MHD boundary conditions to target — and we also don't know the incoming modes. We don't have enough information to back out a real solution anymore; we have to impose one. What's done is to predefine how those incoming modes behave, and that predefinition gets you back to enough information. The most basic type of boundary condition here is called non-reflecting boundary conditions. It turns out that when you go to 3D, what is "non-reflecting" becomes ambiguous. The definition would be that incoming modes are not affected by outgoing modes — but you've got all these transverse modes, too, and it's again not a priori clear what you should do about those. There are two methods that bounce around in the literature. One is "nothing changes the incoming modes": you take your initial conditions, compute — based on your analytically or numerically set up initial conditions, extrapolated a little beyond the boundary of the simulation — what those incoming modes would be, and you fix them; they never change.
That's it for the whole simulation, because nothing can change them. The other is to use the transverse and inhomogeneous terms to somehow prescribe what the incoming modes are. A common choice is that you set the incoming modes to cancel out their associated portion of the transverse modes: if you have an incoming Alfvén mode, you use it to zero out the transverse Alfvén modes. Actually, for the "nothing changes" case, a pretty common scenario is that you don't have incoming modes at all — for instance, if you think about a blast wave, you have a uniform plasma and nothing near the boundaries. I sometimes refer to these "nothing changes" modes as incoming-constant-zero, because the initial condition has no incoming modes. That's true for the case I'm going to show — and the fact that I'm talking about it here probably presages that all is not well. But this was what I started out with, just trying to make sure we understand how the characteristics behave. We thought, okay, great: we set up a hot sphere with higher energy density. There's an angled magnetic field in the background, which I don't plot to keep this from getting super busy; it's along x equals y equals z, a 45-degree angle through the whole simulation volume. Since this is high energy density, it's unbalanced, and we let it go. It expands — the plane in the middle there that I'm plotting is the vertical component of velocity. You see you launch two lobes, and it comes back into equilibrium with some hotter region still living where you started out. The simulation is periodic on all the side boundaries, so you see things come out one edge and in the other edges in a confusing way. But this is what we expect the ground truth to do. Then we go to the two different types of non-reflecting boundaries I've talked about, with incoming-constant-zero on the left and cancel-transverse on the right, and we let it all run.
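The two prescriptions from the literature can be stated schematically in a few lines; this sketch (my naming; amplitudes as simple 1-D arrays) is just the decision rule, not a full boundary implementation:

```python
import numpy as np

def incoming_amplitudes(kind, a_init, transverse_contrib):
    """Two prescriptions for the unknown incoming-mode amplitudes at an
    ostensibly non-reflecting open boundary, as described in the talk:

    'constant'          -- freeze the incoming amplitudes at their
                           initial-condition values; when those are zero
                           this is the 'incoming-constant-zero' case.
    'cancel_transverse' -- set each incoming mode to cancel its
                           associated transverse contribution.
    """
    if kind == "constant":
        return np.asarray(a_init, dtype=float).copy()
    if kind == "cancel_transverse":
        return -np.asarray(transverse_contrib, dtype=float)
    raise ValueError(f"unknown prescription: {kind}")
```

The point the talk goes on to make is that neither rule reacts to what is actually sitting near the boundary, which is exactly what goes wrong for slowly advected equilibrium structures.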
Late in the simulations you can start seeing some detailed fine-scale differences, but the overall behavior is very similar. These really seem to be doing pretty much what we expect. There is some loss of information because the simulation domain is smaller, so we don't expect identical evolution, but qualitatively these are in very good agreement with each other. Now, that might seem like an oversimplified view of something like a flare, but this was really just how we started out testing these different modes. We started with various shock tubes, putting in an Alfvén mode only, or trying to set up some subset of fast and slow modes; this is just the most interesting to look at of those functional wave tests. Yeah, this is just testing the method. Moving on from that, though, we actually get to something that is more like what we would want to be simulating here. For instance, if you launch a CME: a common model for coronal mass ejections is actually this spheromak, because it's a closed magnetic structure. You basically embed this in the wind and propagate it outward, and use that to predict how CMEs propagate through the interplanetary medium. Our second test case, then, to go to something slightly more complicated, is just this spheromak structure, and we're really not doing anything fancy here. Because of how we set this up initially, we set it up at the bottom boundary; you could flip this in your head and have it be outward-propagating if you want. All we care about is that you start with a magnetic field, you advect it at some surface, it passes through the surface, and it's gone. You get a lot of little tiny hairy features from the field-line tracing, but functionally that's zero magnetic field again there. Again, we want to compare the different models and see what happens with these different boundary treatments if we advect this at a speed in between the Alfvén and slow speeds.
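Since the advection speed relative to the slow and Alfvén speeds turns out to matter, here is a generic textbook computation of the ideal-MHD characteristic wave speeds along a chosen direction (standard theory in units with mu0 = 1, not code from the talk; the function name is my own):

```python
import numpy as np

def mhd_speeds(rho, p, B, n, gamma=5.0 / 3.0):
    """Ideal-MHD characteristic speeds along unit direction n.

    Returns (slow, Alfven, fast) magnetosonic/Alfven speeds for a
    uniform state with density rho, gas pressure p, and field B,
    in units where mu0 = 1.
    """
    n = np.asarray(n, float) / np.linalg.norm(n)
    cs2 = gamma * p / rho                    # sound speed squared
    va2 = np.dot(B, B) / rho                 # total Alfven speed squared
    van2 = np.dot(B, n) ** 2 / rho           # Alfven speed along n, squared
    disc = np.sqrt((cs2 + va2) ** 2 - 4.0 * cs2 * van2)
    c_fast = np.sqrt(0.5 * (cs2 + va2 + disc))
    c_slow = np.sqrt(0.5 * (cs2 + va2 - disc))
    return c_slow, np.sqrt(van2), c_fast
```

As a sanity check, with the field aligned to `n` the fast speed reduces to max(sound, Alfvén) and the slow speed to min(sound, Alfvén), so "between the Alfvén and slow speeds" picks out a well-defined window.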
I've looked at this enough times that I can see some slight differences: the cancel-transverse case goes a little bit faster, the incoming-constant case a little slower. But the three simulations are, again like our wave tests, quite comparable to one another. We could be done here, except that we can do this advection at different speeds. If we slow this down a little bit, now we've got a problem, because now these don't look anything like each other. In fact, for our open boundary condition, incoming-constant is definitively reflecting. This is really something we're putting into the literature because I think it's an underappreciated point. Closed magnetic field structures, plasma in any equilibrium other than a totally trivial one with no spatial derivatives in anything, depend upon the counter-propagating characteristic modes, and they depend upon those being well balanced with one another. Even if one takes the case back to just gravitational stratification, that depends upon two modes: if you set the one at your boundary, now it's not balanced with the one that's going out, and you don't have equilibrium anymore. That balance of incoming and outgoing modes becomes very complicated and very deeply problem-dependent. I'm really just presenting a problem here; we're still working on this, trying to think about how you can define what those incoming modes ought to be in a more clever way that is more reactive to what the simulation contents are and what is actually sitting near your boundary condition. [Audience question, partly inaudible:] Isn't this just saying that an outflow below the characteristic speed gives you all kinds of weird reflections and so forth? It's not even necessarily reflections, because this is just like a Galilean transformation. The totally static, stationary spheromak is composed of, six in each direction times three directions, 18 different characteristic modes, which are all balanced with each other.
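The gravitational-stratification example can be made concrete with a small sketch (my own illustration using standard LODI-style wave-amplitude definitions, not material from the talk): in a static isothermal atmosphere the upward- and downward-going acoustic amplitudes are equal and opposite, so overwriting the incoming one at a boundary breaks the equilibrium.

```python
import numpy as np

# Assumed setup: isothermal hydrostatic atmosphere, u = 0,
# p(z) = p0 * exp(-z/H).  In characteristic (LODI) form the acoustic
# wave-amplitude variations at a height z are
#   L_minus = (u - c) * (dp/dz - rho*c*du/dz)   (downward-going)
#   L_plus  = (u + c) * (dp/dz + rho*c*du/dz)   (upward-going)
# For a static atmosphere these cancel exactly: the equilibrium *is*
# the balance of counter-propagating characteristics.

p0, H, c = 1.0, 1.0, 1.0                 # normalized units (assumed)
z = 0.5
p = p0 * np.exp(-z / H)
dpdz = -p / H                            # analytic pressure gradient
u, dudz, rho = 0.0, 0.0, p / c**2        # static, isothermal: p = rho*c^2

L_minus = (u - c) * (dpdz - rho * c * dudz)
L_plus = (u + c) * (dpdz + rho * c * dudz)
print(L_minus + L_plus)                  # prints 0.0: perfectly balanced
```

Freezing `L_minus` at a boundary value while `L_plus` continues to evolve leaves a nonzero residual, which acts like a spurious force on the stratified plasma.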
And for the spheromak here, it turns out they're actually not one-to-one balanced in the same direction: because of magnetic tension and pressure interplaying with each other, you have sideways modes that sum up and end up getting canceled out by part of an incoming mode and part of an outgoing mode, which then are also partially canceling each other. So yes, functionally this is a different way of saying that subsonic, sub-Alfvénic flows have information going both ways in them, and you really need to be careful about how you play with that. With that, I want to wrap up by looking at some future directions. For the data driving, our next step, which we've actually already launched into, is applying this to the Toriumi et al. 2020 data set. This was a really nice paper that took a lot of the different data-driving methods on the market as of 2020, fed them all the same input physics, and showed that none of them get the ground truth back. We're testing right now to see whether we'll do better than that. Beyond that, though, the data driving is actually ready for observational data. We have an accepted DKIST proposal, which I'm hoping will still get observed this year; it's been a tough year with weather and fires, so it may not be. What we're doing is trying to use the observational constraints that I showed in the early slides, backing out some subsets of the three-D volume in the chromosphere and corona using magnetograms from DKIST, and then using these HMI magnetograms that I mentioned earlier as the driving data, to see how well we can reproduce what is observationally inferred to be present in the three-D volume for some active region evolution. For the open boundaries, this is really open work. We're thinking a lot about what the incoming modes tell you about the structure, versus what you can learn about the incoming modes from the outgoing structure. Basically, what portions of the incoming modes would you want to cancel out?
And what portions of the outgoing and transverse modes would you want to reinforce, to enforce certain MHD properties, which I alluded to before? You can get to special cases where this is a solved problem, but in general for MHD it's not. That's where I will wrap up. Thank you. [Audience question:] Is data driving being used to tell the observers and the Sun missions things like how often a mission should observe? Yes, that's one of the conclusions: these are the characteristic time scales of how things move around, and we need data that often to be able to do a really good job. What is actually done right now is not terrible. At that 12-minute cadence, the quiescent build-up, slow active region emergence, that's pretty good. You can get a feeling for that from looking at one of the magnetogram movies that HMI puts out: you can watch features move around, and nothing suddenly appears or disappears. But right when flares go off, it's probably not sufficient. HMI has some other modes that give you different data cadences, but not over the full Sun. We may be able to back out enough snapshots to get it around an eruption, but for the whole Sun it's not sufficient. [Audience question:] Is anybody working on developing physics models that might solve your problem? Yes, that would be physics-informed AI models. There is a lot of that going on in the solar community as a whole. There is some really interesting work I've seen published in the past couple of months trying to basically create synthetic AIA images in different wavelengths. I have a feeling that things like that could help us fill in these gaps, which might then also help us work with the data problems that we have in different satellite missions. But it's not really been targeted to that specific application yet, because this has not totally filtered through the field: it's only since 2017 that we have known we have this problem. People were arguing when James, the lead author on the paper I mentioned, was talking about data cadence for data driving.
People were saying, well, you do really simple data driving; I'm not sure that's going to hold up. We're hoping to hit everybody over the head with: no, we did something really, really sophisticated here and we still have exactly the same problem. [Remaining closing exchange inaudible.]
AstroSeminar-01Sep23-DylanKee
From Stanley Owocki September 08, 2023