Okay. All right. Hi, everybody. So today I'm just going to try to review the sequence of things, the whole path from raw data to finally the nice and shiny ERPs looking back at us. Before that, Jacob, do you want to look at the schedule for the next two weeks? We've had an influx of subjects. Also, next week Asia is going to be talking about those two MMN papers; I will add that to the schedule. And I believe today you said you would give us some of the data to look at ourselves. Absolutely. So maybe in two weeks, I'll add this to the schedule, but in two weeks we can reconvene here during our regular meeting to go over what we have, the data. Exactly. Back to you. That's right. Thank you, Jacob.

So, the things I'm talking about today are most valuable, and maybe most interesting, to you if you're actually engaged in data analysis. Even if you're not engaged in data analysis right now, it's useful to see. But if you are, or are going to be engaged in data analysis, which I know you will be, then you'll get a lot of those questions where you think, I wish I had asked that during the presentation. So I'm going to try to anticipate those kinds of questions a little bit. Let's see, I'm going to demonstrate some things too, so I'm going to start MATLAB. Okay, let's get MATLAB up. There. Now let me get rid of this, hide floating meeting controls, right? There you go. Then MATLAB. There you go. Okay. All right.

So, when we're talking about this thing, we always call it the pipeline. What's the pipeline? What's your pipeline? The way I think about it, I don't only mean the sequence of operations. It's really the term we're using for the whole project of organizing the data, documenting it, and specifying exactly what you did to the data at every single step. So it's not just a sequence of operations, it's also about data organization, about best practices with respect to data organization.

Why is this important? Well, you need to do this because of reproducibility, right? You want to be able to document what you did and then replicate it. This is something that's becoming increasingly standard in science: you don't just write a paper, report a t-test, and say here's the result. You have to document it, you have to share the data, maybe put up the scripts for what you did, so that other people can come in and say, yes, that's correct. Call it a presumption of guilt, maybe, but it basically means that you want people to be able to see transparently that what you did is correct. That's actually very helpful for yourself as well. Now, here's the other thing. It's true that it's important to make things replicable by others, but it's actually even more important to make them replicable for yourself. Because when you're running a big experiment, you're collecting tons and tons of data. You're generating lots of files: files from the behavioral side, from the EEG side, from multiple computers, everything. You're working away on something and you might finish it, and then you put it aside.
Maybe you got halfway through the project and put it aside, and you come back two months later, look at what you did, and have no idea: what the hell did I do here? I don't know what I did. I can't tell from my own data organization. In my experience, it is often very, very difficult to figure out what you did, what you yourself did. So I'm trying to establish work routines that make it easy for me to come back to something and say, this is what I did, now let me continue from here and finish the job. Have any of you had the same experience? Right. Exactly. And your data, I'm sure they're complicated enough, even if they're not exactly the same as ERP data. Even if you have simple data, even if you have just a simple reaction time experiment with some behavior, some reaction times, you want to put the files somewhere you know where they are, you want to know exactly what they mean, and so on and so forth. So it's really for yourself, almost more than for other people.

Okay. So how do you ensure that you can replicate your own work? By documenting it. Document it for other people in the lab, the people you work with, your collaborators, your advisor, but also, primarily, for yourself, so you can see what you did and do it again.

All right. So here is one way. I'm just showing you how I'm doing it; I've been doing this differently over the years and I keep changing how I do things. The principle is really: find a way to work that makes it completely transparent and clear what you did. My preference, especially when you're running an ERP experiment, is to organize the data into a folder structure which is hierarchically, or linearly, organized. Here, for example, this is for the rotation experiment. I create a folder for the data; I call it ROTATION DATA, making it easy to spot on my computer by capitalizing the folder name so it stands out. Inside that folder, I like to organize the data in the temporal order in which it arrived. So what's the temporal order of an experiment? Well, before you start the experiment, you have the actual code for the experiment, in this case the E-Prime program. So I like to start out with a folder for the E-Prime program, so I can go back and look at exactly what this experiment did. That's also very important for your own analysis routines: you need to look at the actual experiment to interpret the coding of the trials, the stimuli, how the data are organized, and so on. So I always start with the E-Prime program folder. In there we have the E-Prime program and we have the stimuli. Another thing that I actually didn't put in here, but that is useful to do, is to put the IRB documents somewhere. Sometimes you want to go back and look at your IRB document. The documents are on IRBNet, but it's also practical to have them readily available in your project folder, I think, so you can always go back when something gets resubmitted or the like. That's all part of the preparation for the experiment. So the first folders, the zero-numbered folders, are part of your preparation to run the study.
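For illustration, a minimal MATLAB sketch of this kind of numbered, temporally ordered folder layout might look like the following; the folder names here are hypothetical placeholders, not the lab's actual names:

```matlab
% Hypothetical, numbered folder layout in temporal order (names are placeholders).
projectRoot = 'ROTATION_DATA';              % capitalized so it stands out
steps = { ...
    '00_EPrime_Program', ...                % experiment code, stimuli, IRB documents
    '01_EPrime_Data', ...                   % behavioral data files from E-Prime
    '02_MFF_Exports', ...                   % raw EEG exported from Net Station
    '03_EPT_Imports', ...                   % files read into the analysis toolbox
    '04_HighPass_Filtered', ...
    '05_Segmented', ...
    '06_Artifact_Corrected', ...
    '07_Rereferenced_Baselined', ...
    '08_Averages'};
for k = 1:numel(steps)
    mkdir(fullfile(projectRoot, steps{k})); % mkdir creates intermediate folders as needed
end
```

The numbering mirrors the temporal order of the pipeline, so each later step always knows where its inputs live.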
Then I like to organize the data that comes in into a linear hierarchy. What kind of data do we get in an ERP experiment? We get two kinds. We run the experiment in E-Prime on one computer and we collect the EEG on another computer. Those two pieces of software running together generate files, and the E-Prime data files are part of the data you get from the experiment. I always put the E-Prime data files in a folder near the beginning, so I can always go back and look at the behavioral data. The behavioral data are not always used in an ERP experiment, but often they are, and you want to keep them as part of your data. Another useful thing about the E-Prime data files: when you run an experiment, you get, say, 20 E-Prime data files, one per subject. You merge them together and create a spreadsheet that contains all the behavioral data. That doesn't only contain the reaction times, it also creates a record of the design of the experiment. You can go back and look at it and ask: do I have the right structure in my stimuli? How many trials do I have in each level of the independent variable, how many trials were delivered, how many do I have in the ERP file? Very important to do that as well. Also, you can use those merged E-Prime files to generate an overview of all your subjects. When you write a paper, you have to report how many subjects you ran, what the average age was, what the distribution of gender was, and so on, and that should all be in that E-Prime data file collection. You should have it ready so you can generate the description of the subjects for the methods section. Any questions? Tell me if this is all familiar.

Okay. So I have that, the behavioral data, the E-Prime record of the experiment. And then we have the EEG data. What do we get from the EEG? Well, we record the raw EEG, right? Since we're using Net Station, those are the Net Station session files. Each session file is technically a folder, with many subfolders and the data organized inside them. You don't really need to look inside those things. Those folder structures are called MFF, which I believe stands for multiple file format, though I forget exactly. That's the file you get when you run an experiment; that's the native Net Station format, the session files. Those are folders, and what we do then is export them out of Net Station, because we want to transfer the data out of the Net Station world and into the MATLAB world, basically.

Something else I wanted to say there: the initial Net Station session files are the raw data format in Net Station. Now, you could actually do all this data processing in Net Station. There are tools in the Net Station software that let you filter, identify artifacts, baseline correct, segment, average; all those things you can actually do on the Net Station side. But it's more practical, logistically, to liberate yourself from the actual physical machines that collect the data, and analyze the data in a more portable format. MATLAB allows you to take the data from those machines and work on it on any computer you have that runs MATLAB and the relevant software. Okay. All right, so we recorded the data, and once we have the recorded data, we export it.
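For the kind of design check described above, a hedged MATLAB sketch of counting trials per cell from a merged E-Prime export might look like this; the file name and column names (Subject, SpeechType, StimType, Gender, Age) are assumptions, not the actual E-Prime field names:

```matlab
% Assumed merged E-Prime export (tab-delimited); file and column names are hypothetical.
T = readtable('merged_eprime_data.txt', 'FileType', 'text', 'Delimiter', '\t');

% Trials delivered per cell of the design, per subject
cellCounts = groupsummary(T, {'Subject', 'SpeechType', 'StimType'});
disp(cellCounts)

% Quick subject overview for the methods section (age and gender per subject)
subjectInfo = groupsummary(T, {'Subject', 'Gender'}, 'mean', 'Age');
disp(subjectInfo)
```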
Actually, I didn't plan to talk about those things, but maybe that should be explained too. How many people have used Waveform Tools to export data yet? A few of you. Those who took the seminar learned that; if you haven't done it yet, that's okay. I'm going to skip over that part here and assume you know how to export the data from the session files.

Can you use R to analyze it? R is just a statistics tool, statistics software. Once you get to that stage, you already have, well, depending on the level of detail or sophistication you want: the goal of the data processing is to get from the raw data to some expression of the data that you can use as input to statistics software. And the format of that data structure is going to be the same whether you use R, SPSS, or Excel to calculate the statistics, et cetera. Yes, there is a package for doing the preprocessing part, developed by Schram in Germany. It's really nice because it natively works that way; someone suggested we have a trial session to try it out. I just don't know how it talks to the native Net Station files. Well, there's that tool, and there's also a newer tool for EEG. Okay, I'm a bit Zoom-challenged here, looking at this computer. The appeal of something in R is that it would mean everything is scripted; I know MATLAB has scripts too. I like the idea of having something in R where I can just save the whole project.

So, there are tons of different pieces of software to use. Of course there's Steve Luck's ERPLAB, which is actually more widely used than what I'm using, the EP Toolkit. And if I were starting over, I might want to learn the tools that are more widely used, like ERPLAB. We haven't used it here; ERPLAB is an extension of EEGLAB. So there are lots and lots of different types of software, lots of different MATLAB-based software suites. How do you say that word, S-U-I-T-E? Suite. I think what you have to do as a researcher is find a tool that you like, that you learn, that you know how it works, and then try to develop some kind of routine for using that software throughout your work. I'm illustrating what we're doing with the EP Toolkit, but everything we're doing in the EP Toolkit can be done in ERPLAB, some of it can be done in EEGLAB, some of it in other software packages like LIMO, which is a single-trial EEG analysis package. I'm sure there are lots of packages in Python too. There are lots of ways to do things. The important thing is really to understand what you're doing, not necessarily which tool you're using. It's just like a t-test: if you know how to run a t-test, you can basically do it with pen and paper. Once you know that, you can run the t-test using R, you can run it using Excel, you can use SPSS, and so on and so forth. The actual tool you're using is a practicality, but you have to really understand the theory behind what you're doing. That's important. MNE, exactly, right. And there's the big MATLAB system for analyzing fMRI data and all the Friston work. Yeah.
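Going back to the point about the end product: whatever toolbox does the preprocessing, what eventually goes to R, SPSS, or Excel is usually something like a long-format table with one row per subject and condition. A sketch with made-up placeholder values:

```matlab
% Long-format table for statistics software; all names and values here are placeholders.
subj = repmat((1:20)', 3, 1);
cond = [repmat({'deviant'},    20, 1); ...
        repmat({'standard'},   20, 1); ...
        repmat({'background'}, 20, 1)];
meanAmp = randn(60, 1);                          % placeholder mean amplitudes
statsTable = table(subj, cond, meanAmp, ...
    'VariableNames', {'Subject', 'Condition', 'MeanAmplitude'});
writetable(statsTable, 'erp_mean_amplitudes_long.csv');   % readable by R, SPSS, Excel
```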
This is an anecdote from my own professional life. When you're starting out analyzing neural data like ERP data, and the same is probably true for MEG and fMRI, there is a very complex and huge amount of information there, and it's very easy to get lost in: should I do this, should I do that, what's the right way to do things, should I use this package or that package? It can take all your time. It's a good idea to find a way of working that works for you and then stick with it, so that you can actually be productive and not spend all your time on methodological quandaries. I'm getting a little sidetracked now. Let's go back to the thing that we're actually doing with the data, and keep in mind that the actual software you're using is secondary to what you're doing.

What do we do with the data? First of all, again going back to practicalities, you guys are going to do this yourselves very soon. You record the data, you get the Net Station session file, and then what you want to do is export it to a format that you can import into MATLAB. That's the MFF format. You go to the Net Station computer, I can't show that here because this is not a Mac, go to the Waveform Tools, find the MFF export, drop your raw session file into the Waveform Tool's input folder, run it, and you get an MFF file. What we're doing right now, for example: we have the two experiments, the rotation experiment and the Yoda experiment, and as we collect data, we export each session file to MFF and then put it in a folder structure that looks like this. This is my own folder in my own work environment; that's why you haven't seen it before. I take all the MFF files and dump them in that folder. Again, being very practical here: dump them in the folder, and then I read them into the EP Toolkit format. The EP Toolkit is a MATLAB toolbox developed by Joe Dien at the University of Maryland. It does everything that all the other toolboxes do: filtering, segmentation, re-referencing, baseline correction, averaging, all the things you do in the other software. I like to use it because it's actually specialized for running PCA and ICA analyses of ERP data.

One of the big problems I ran into when I started doing ERPs was that I kept asking myself: how do I select my time windows of analysis? How do I select my electrode regions? Should I go by what someone else said? Should I go by what I'm looking at in the data? You end up with these questions, which are valid questions, but they can be very time consuming and unproductive, actually. Then I discovered that I can use this PCA technique to identify the portions of the time window where there is common activity. So maybe from 100 to 300 milliseconds there is some activity that's specific to that time period, right? The PCA helps me identify that based on the analysis of the data itself, not based on what Jacob said to do, but given to me by the data. So that's why I like the PCA, and I ended up using this toolbox because it's designed to produce those kinds of analyses. EEGLAB also has ICA, but it's based on a different way of organizing the data. You've got to find a workflow that works for you, that fits with your own understanding and what you want to do. Okay.
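The EP Toolkit's PCA routines (with rotation and other refinements) are considerably more sophisticated, but the basic idea of letting the data pick out time windows can be sketched with MATLAB's own pca function (Statistics and Machine Learning Toolbox); here 'erps' is an assumed matrix of observations (subject-by-condition ERPs at one electrode) by time points:

```matlab
% Bare-bones temporal PCA illustration, not the EP Toolkit's actual procedure.
% erps: assumed [nObservations x nTimepoints] matrix of ERP waveforms.
[coeff, ~, ~, ~, explained] = pca(erps);

% The leading temporal components show where the shared variance lives,
% e.g. a component loading heavily between roughly 100 and 300 ms.
t = linspace(-200, 800, size(erps, 2));      % assumed epoch, in milliseconds
plot(t, coeff(:, 1:3));
xlabel('Time (ms)'); ylabel('Component loading');
legend('PC1', 'PC2', 'PC3');
disp(explained(1:3))                          % percent variance explained by each
```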
Let's go back. What I can do here is teach you how the EP Toolkit works. Once you know how to do things in the EP Toolkit, you understand what you're doing; once you understand the operations, you can take that understanding and do the same thing in other toolboxes, other pieces of software. Let me go back. Here we go. I get all my MFF files in one folder, and the next thing I do is import them into EP, the EP native format. Let me just show you quickly how to do every single step.

Okay, so here is MATLAB. The EP Toolkit uses FieldTrip and EEGLAB functionality; I'm just starting it up in advance to set the file paths to point at the right directories for the EP Toolkit. That's the reason I'm starting these things up in advance. So, the first thing I do, let me just recap quickly: I import the data into the EP Toolkit and save it as native files, EPT files. Next, I high-pass filter the continuous data. Then I segment it, then I artifact correct it, then I re-reference it and baseline correct it, and then I average it. That's basically the sequence of things. Here it is, here's the pipeline summary. What is that? Oh, it's a spider. Okay. If you open the EP Toolkit and look at the menu, these are the names of the different menus: there's a Read menu, there's a Transform button, there's a Segment button, Preprocess, and Transform again. This is basically the pipeline. First, I import the continuous 64-channel EGI session files and save them as EPT files.

Here's another important point. You could do all the steps in one place: put everything in one big directory, do all these operations on the files, and end up with one big folder with tons and tons of files. Sometimes I do that too. But I find it's helpful, for my own sanity, not to put everything in one big folder. Here's what I do: I treat the pipeline as a series of steps, and I put the output of each step into a separate folder. So here are my MFF files; here are the corresponding EP files. I run a high-pass filter on those files and I put the output of that in a separate folder, the next folder. So here is my high-pass filtered data. Then the next step is segmentation: I segment all the high-pass filtered files and put the output of the segmentation into another folder again. There's the segmented data. Once I have that, I'm ready to run artifact correction. I run artifact correction and put the result of the artifact correction in a folder again. And so on, the next step, and the next.

Why do you think I'm doing it this way? Well, sometimes you start out with some preconceived idea of what you're going to do, you run that process, and then you realize, oh, I should have done something different at an earlier step. You want to go back and change a parameter in your processing for some reason. The easiest way to find the files to do that is to organize them in separate folders like this. For example, suppose you high-pass filtered the data and then you realize you should have used 0.1 Hz instead of 0.3 Hz. Then you can go back and re-run just that step on the files in that separate directory. Just keep things in order for yourself. Keep your ducks in a row, somehow. Again, here's the process: I import the files.
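A generic sketch of that one-folder-per-step pattern, independent of any particular toolbox; applyHighPass is a hypothetical stand-in for whatever the step actually does (in the EP Toolkit this is done through the Transform menu), and the folder names follow the hypothetical layout above:

```matlab
% One-folder-per-step batch pattern; applyHighPass is a hypothetical stand-in.
inDir  = fullfile('ROTATION_DATA', '03_EPT_Imports');
outDir = fullfile('ROTATION_DATA', '04_HighPass_Filtered');
if ~exist(outDir, 'dir'), mkdir(outDir); end

files = dir(fullfile(inDir, '*.ept'));
for k = 1:numel(files)
    [~, base, ext] = fileparts(files(k).name);
    inFile  = fullfile(inDir,  files(k).name);
    outFile = fullfile(outDir, [base '_hp' ext]);   % mark the step in the file name
    applyHighPass(inFile, outFile);                 % hypothetical step function
end

% Leave a note to yourself documenting what this step did
fid = fopen(fullfile(outDir, 'README.txt'), 'w');
fprintf(fid, 'High-pass filtered each file, 0.1 Hz two-pass FIR. No re-referencing, no baseline correction (continuous data).\n');
fclose(fid);
```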
The next thing I do, I use the Transform function. I high-pass filter each file, and this is what I'm doing for the rotation study: I high-pass each file with a 0.1 Hz two-pass FIR filter. There are other filters you can use there; this is the one I'm using now. Once I've done that, I then segment the data. Let me actually show you; let me find that directory. It's a little hard to do this. Here we are, here's how it looks. Here is all the data that comes out of Net Station. And now I can see, for example, how many subjects we have. We now have 29 subjects. There's one missing subject; subject number 12 is not there. So I put in a little explanation for myself: why is there no subject 12? Let's see what it says. There it is. I got an email from Jacob that morning about subject 12, explaining that we weren't able to collect usable EEG data from that subject. So that's the explanation. Now I know why there's no 12, and I don't go running around looking for subject 12 in the lab or something. Where is that data? This is really for yourself, for your own good. That's why I'm doing this.

Let me just very briefly illustrate the Read step. You hit Read and you go and select the MFF folder. For the people who are working on the Yoda experiment, I'm going to suggest that you do this with the Yoda experiment. You go to your file, you select MFF, and then you select the file type; it's going to be continuous, it's a continuous file. It actually doesn't matter which montage information you put there, because that will be read in from the file itself. But if you're very nervous or superstitious, you can always select the HydroCel 64-channel montage. You tend to get superstitious after a while when you work in the lab. If you just use the default, it will still read the correct montage from the data. That's right, you can use this one too. Let's see if I can change my screen settings a bit. It's kind of tiny, I know, sorry about that; I'm not sure how to make it bigger. So then you just read the file. Let's go and read the file. Where are they? There are the MFF session files. Let's read one in, and you can follow along with what's going on here. This is actually using EEGLAB functions, the MFF import in EEGLAB. This will take a while, and once it's read, you just save it as an EPT file. Actually, it takes a long time, so let me kill that one. Once you have it in, you just go to Main, click Save, click on the file that was imported, and save it as an EPT file. So that's what I did here: I took all these files and saved them as EPT files. There they are. Is it the same as the MFF file? Yes, except it's in the EPT format. Again, it's nice to keep those things in a separate folder, so here is that intermediate stage of data processing. You can always go back and fix something there if something went wrong with a single subject, for example. Next thing, what's the next thing I do? Let me check my pipeline. I then high-pass filter each file with a 0.1 Hz finite impulse response filter. Let me just briefly show you how to do that.
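As noted earlier, the same operations exist in EEGLAB; a rough EEGLAB sketch of this import-and-filter step might look like the following, assuming the MFFMatlabIO plugin is installed and using placeholder file names. The filter shown is a zero-phase FIR high-pass, comparable to but not identical to the EP Toolkit's two-pass filter:

```matlab
% Rough EEGLAB equivalent of the Read + Transform (high-pass) steps; names are placeholders.
[ALLEEG, EEG, CURRENTSET] = eeglab;                    % start EEGLAB

EEG = mff_import('sub01_rotation.mff');               % MFFMatlabIO plugin reads the Net Station export
EEG = pop_eegfiltnew(EEG, 0.1, []);                    % 0.1 Hz high-pass FIR (zero-phase)
EEG = pop_saveset(EEG, 'filename', 'sub01_rotation_hp.set', ...
                       'filepath', fullfile('ROTATION_DATA', '04_HighPass_Filtered'));
```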
Again, you go back to Main and hit Transform, and in the Transform menu, there we are. How can I make the resolution bigger, Jacob? Sure: Appearance, Built-in Display, use as mirror. Is that a little bit bigger? Not really. Let me just see what's going on. So, what am I doing now? I high-pass filter. When you go to the Transform data menu, you can do lots of things: you can re-reference the data, you can baseline correct it, and so on and so forth. I don't want to do any of those things at this stage; I only want to high-pass filter the data, to keep a clean separation between each operation I'm doing, so I can go back and check it again later. So I basically say: I don't want any re-referencing, so I set that to none. I don't want any baseline correction, because it's continuous data; there's nothing to baseline correct relative to, short of baseline correcting the entire file. I set all of those to zero. Then I just select my high-pass filter. That's all I'm doing at this step: a high-pass, two-pass, finite impulse response filter, set at 0.1 Hz. It can barely be seen, I apologize. Then you just hit Transform and run it. And what do I do? I go to my folder with the EPT files, select all of them, and run it as one big batch job. That filters all those files, and then I take the result and put it in a new folder. That's what I did here; I already did it at home. Let me show you, here it is: here's the high-pass filtered data. The filenames get an underscore suffix, so I can see this is a filtered file. And I put a little readme in here again. What does it say? Oh, it's RTF format, so I can't read it in MATLAB. It's just a little file that says I used this particular filter setting and so on, so you can go back and see. You're basically telling yourself: I used this filter setting. Let's see, there, I can open it: "I high-passed each file with 0.1 Hz, two-pass. No re-referencing, just to keep the steps discrete and separable. No pre-stimulus baseline correction, as this is continuous data." This is my tip: add these little messages to yourself so you can see what you did.

What's the next step in the pipeline? Let me go back to my presentation. Now I'm going to do segmentation. There's a segmentation tool in the EP Toolkit that I'm not going to explain in detail right now, but I'm going to show it to you. There's the Segment menu item. I already created a segmentation table; maybe we can do a separate session on how to create segmentation tables in the EP Toolkit, but it's basically the same logic as the Waveform Tools in Net Station. Let's see: rotation data, segmentation, segmentation table. For some reason I can't open it, so I wrote myself a little message saying, okay, this is how I segmented the data, these are the parameters for the segmentation. What do you need to specify there? You need to specify what baseline period you want to keep; I'm using a 200 millisecond baseline period. Then you specify how long the segment is; I'm using 800 milliseconds. Why can't I open this? Let me see, I want to look at it. Let me try again. Show it in Finder. Okay: research, rotation data. Here we go. There it is, right. So here's what I wrote to myself.
When we started running this experiment, I discovered after we had run a few subjects that we should have added a tracking code to the experiment to help us with the segmentation. So I had to write a separate segmentation table for the first three subjects, and I wrote myself a little explanation of that: the first subjects don't have the speech-type independent variable codes, so they need a different segmentation table; for the rest, use this one. I'm basically telling myself: if I want to go back and redo this, I need to do something different for the first subjects. Again, this is just documentation to myself.

When you're doing the segmentation, of course, you have to know exactly what your design is. What's the design of this particular experiment? In this particular experiment, we have a two-by-three design. We have two levels of speech type: actual natural speech, and then rotated speech, the weird one. For each of those speech types, there are three levels of the independent variable of stimulus type: there's a deviant, there's a standard, and then something else we call the background standards. Basically, this is a mismatch (MMN) experiment. We're presenting a frequency distribution of standards, like a cloud of standards that's normally distributed with respect to an acoustic feature; they form a frequency distribution, and then we have some deviant oddball stimulus that falls outside of that distribution. So that's the deviant. And we have two different codings of the standards: we can either compare the deviants to just a subset of the standards, the ones at the mean of that frequency distribution, or to all of them. So we're coding for all three levels here. When I run the segmentation, I really want to segment the data into every single cell of the design; you can always combine those cells later into main effects and combinations of cells. So here's my segmentation. Also, for your segmentation you have to be very clear about what your design is, what the within-subject variables are, and what the between-subject variables are. In this experiment, the type of sound that serves as the deviant is a between-subject variable, and the within-subject variables are speech type and the difference between standards and deviants. Maybe we can do a separate little session about segmentation tables for each experiment.

Where am I in my pipeline? Let me go back to it. Okay, there we are. I run the segmentation tool and I put all the segmented files in another folder again. Again, why do I do this? Well, maybe I discover I want to segment a little bit differently, I want to use a different coding, maybe I made a mistake, or I want to change the pre-stimulus baseline. Then I can go back and do that on just that subset of the data. All right. So what do I end up with? Let me show you again; open this in Finder. So here are all my segmented files. You see now that there's a raw file, then it's filtered, and now it's segmented. I'm keeping different versions of my ducks in different rows, so to speak. Very important.
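In EEGLAB terms, segmenting into every cell of the two-by-three design might be sketched as below; the event codes are hypothetical placeholders for whatever trigger codes the E-Prime program actually sends:

```matlab
% Hypothetical event codes for the six cells of the 2 (speech type) x 3 (stimulus type) design.
cells = {'NAT_DEV', 'NAT_STD', 'NAT_BKG', 'ROT_DEV', 'ROT_STD', 'ROT_BKG'};
EEG = pop_loadset('filename', 'sub01_rotation_hp.set', ...
                  'filepath', fullfile('ROTATION_DATA', '04_HighPass_Filtered'));

for c = 1:numel(cells)
    % 200 ms baseline, 800 ms post-stimulus window, as in the pipeline above
    EEGcell = pop_epoch(EEG, cells(c), [-0.2 0.8]);
    EEGcell = pop_rmbase(EEGcell, [-200 0]);
    pop_saveset(EEGcell, 'filename', sprintf('sub01_%s.set', cells{c}), ...
                         'filepath', fullfile('ROTATION_DATA', '05_Segmented'));
end
```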
Once everything is segmented, I'm ready to run artifact correction. Artifact correction is a complex algorithm that does a lot of different things: it identifies eye blinks, it identifies eye movements and saccade potentials, it decomposes the data into independent components corresponding to those artifacts, removes them, and replaces them with corrected, interpolated data. How do we do that? We go to Main and hit the Preprocess button here. In the Preprocess dialog, you have to specify the baseline period in sample points; if you have a 200 millisecond baseline, you specify that as samples 1 to 50. Can you see that? Next time I'm going to figure out how to make this higher resolution. Then I basically accept the defaults: what I'm doing here is just using the default settings for the artifact correction tool, because it does everything I want. It does eye blink subtraction. It does bad channel replacement, so it identifies bad channels, channels that are going haywire for some reason or other, deletes the data from those bad channels, and replaces them with spline interpolations of the surrounding channels. It has something that fixes movement artifacts, something that fixes saccade potentials, and so on and so forth. This is a multi-step correction algorithm, a combination of many, many different things. If you have done any ERP analysis before, you've done bits and pieces of this in different pieces of software: maybe you used EEGLAB for eye blink subtraction, maybe you used some other things in some other system. This is a state-of-the-art type of artifact correction, and that's why I'm using it. I'll show you some of the results, what it looks like when you run artifact correction: for every single subject, you get a report.

Let me just go back a little bit for scheduling. Next week we have Asia doing the MMN studies. In two weeks, for the rotation data, it would be good if we all came in with the data processed up to the point of the high-pass filter; then we can go over creating the segmentation table together. And for you two, if you're interested, I can get you set up with the EP Toolkit and all that; otherwise, we'll learn how to do all of this next semester, up to this point. The rotation data is on the Google Drive, and there's a folder structure there where you can find all the MFF files. Can I do part of this on my own, since I'm curious to sit and look at the data, or do I need to go through the whole process? Not necessarily, no. I can take my processed files and dump them on the Google Drive, for example, so you can look at them there. But the goal really is that you learn how to do this yourselves.
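The EP Toolkit's automatic correction is its own multi-step algorithm; purely as a rough illustration, two of the ingredients it combines, bad-channel interpolation and ICA-based blink removal, can be sketched in EEGLAB as below, with the bad channels and blink component assumed known rather than detected automatically:

```matlab
% Rough sketch of two pieces of artifact correction, not the EP Toolkit's algorithm.
EEG = pop_loadset('filename', 'sub01_NAT_DEV.set', ...
                  'filepath', fullfile('ROTATION_DATA', '05_Segmented'));

badChans = [17 42];                               % hypothetical bad channels
EEG = pop_interp(EEG, badChans, 'spherical');     % replace with spherical-spline interpolation

EEG = pop_runica(EEG, 'icatype', 'runica');       % decompose into independent components
blinkComps = 1;                                   % assumed blink component index
EEG = pop_subcomp(EEG, blinkComps);               % remove blink activity, keep the rest

pop_saveset(EEG, 'filename', 'sub01_NAT_DEV_ac.set', ...
                 'filepath', fullfile('ROTATION_DATA', '06_Artifact_Corrected'));
```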
So this is the interesting part. The artifact correction goes through lots of different steps, and the report actually shows how it gradually cleans the data. It starts out with the raw data, which is not yet baseline corrected. We had to specify in the artifact correction what the baseline period is going to be, so it baseline corrects the data before artifact correction; there's the baseline-corrected data. Then it identifies bad channels and shows you the data again. After that, it finds saccadic spike potentials. If you look in the report for a subject, you can see those spike potentials in the continuous data: tiny little spikes, going like that. Those are the eyes jumping back and forth. The spike potentials are identified with ICA and subtracted from the data, and this is what it looks like without them. Then it handles the saccade movements; the saccade movement artifact is a different kind of artifact, related to the eye movements themselves. There they are; subtract them, and the data without the saccades looks like this. Then here is the big one: it identifies eye blinks with ICA. They are very, very big and easy to spot; here are all the eye blinks for this subject. The data is the combination of the blink part and the non-blink part; it removes the blinks and shows the data again without them. It looks a little bit better. Then it finds movement artifacts, which is when you move your head and get these big shifts in the data; those are identified and subtracted. Then it looks for EMG artifacts, the electrical activity from muscle contractions, and there they are. So now you can see that you move from very noisy data to clean data by subtracting out all the artifacts. And finally, it tries to identify globally bad channels, channels where maybe an electrode had very high impedance when you started recording; those are identified and replaced, and you end up with the clean data here. It's very satisfying to watch your data get cleaner and cleaner.

Okay. Once you've done the artifact correction, the next step is to re-reference the data and baseline correct it. You go back to the Transform button, select your new reference, maybe I want average referencing, so I select average reference, and then you specify your baseline period. The data starts at -200 milliseconds; I forget exactly how to specify it, I'd have to go back and look at my own notes to myself again. You specify the baseline period there and re-baseline-correct the data. The last step, and I have to stop now because I have another meeting at 11, the last step is that you average the data. Now I have all my single subjects, all cleaned up, and I go and do the averaging: hit the Average button. That takes all the single-subject data and puts it into one big data file, the average for every single subject, combined into one collection.

So that's it. I'm sorry, I've got to stop because I have another meeting; people are going to be mad at me if I don't come. So that's basically the quick pipeline review for these experiments. I think the next thing we're going to do, what I would suggest, is that I'll share all this information with you, I'll give you the PowerPoint, and I'd like to get the people who are involved with the Yoda experiment to actually start doing this process with the Yoda experiment. In the Yoda experiment we're actually looking at the ERPs at two different places in the sentences, both at the object position and at the sentence-final position, so you need to create two different segmentation schemes for those data; basically two data sets, one for the object position and one for the final word. That's what we need to do. All right, thank you guys, thank you for your patience.
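Going back for a moment to those last steps of the pipeline, re-referencing, baseline correction, and averaging, a hedged EEGLAB-style sketch with placeholder file names might look like this:

```matlab
% Final steps: average reference, re-baseline, then average over trials.
EEG = pop_loadset('filename', 'sub01_NAT_DEV_ac.set', ...
                  'filepath', fullfile('ROTATION_DATA', '06_Artifact_Corrected'));

EEG = pop_reref(EEG, []);                 % [] = average reference
EEG = pop_rmbase(EEG, [-200 0]);          % baseline on the -200 to 0 ms window

erp = mean(EEG.data, 3);                  % channels x time, averaged across trials
save(fullfile('ROTATION_DATA', '08_Averages', 'sub01_NAT_DEV_erp.mat'), 'erp');
```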
Oh, Sathi. Hi, Sathi, hello. We're trying to switch to your meeting now. Okay, I was confused; why are they coming in, why are they there? Sathi, can you give me a couple of minutes? I'll just remove myself from this meeting and then come to the room. Okay. But are you talking about a physical room or the Zoom room? Do you see all the people? A physical room. Okay, so give me a minute; I'm going to take my computer with me and go into the secret room. Yeah. Okay. You're still recording your meeting, you should probably stop recording. That's a good idea, yeah. All right, so how do I stop recording? Maybe I can just stay here, since everybody's leaving; we'll just stay here. Shut the door. Okay, all right, thanks. I hope we can continue this again; I'll send you the final version of everything. Okay, what is that background you have there? These are the mask collection that Kyle has; I'm in Amos today. Oh, it looks kind of scary. Yeah. Thank you for coming to back-to-back meetings; it's a crazy thing. Let me find my PowerPoint for this meeting. I think you're still recording; I don't know if you need to go to Record and then hit Stop or Pause. Or the menu at the top? There might be a way to do that. There we go, stop recording. There you go. Okay.
EPL lab meeting on PIPELINE, Oct 25 2024
From Arild Hestvik October 25, 2024