CS295J/Project Schedule:Caroline/Fields Interview


What's the data? What do you want out of it?

Q's: language processing, psycholinguistic questions, how social/emotional factors affect language processing

ERP is good for temporal sensitivity - can go as far as the millisecond level. Interested in which *stages* of processing are being affected.

MRI and MEG (magnetoencephalography) - MEG is similar to EEG. Differences: skull and scalp are less opaque to magnetic imaging.

combining data sources is difficult - complex math involved; mostly gathered and analyzed separately. Phil Holcomb at Tufts is working on a project to bring the data together, but on a long time scale.

they mostly gather EEG data at 5ms intervals; higher-res is possible, but not helpful

-- experiment room --

experiment: sentence comprehension task, varying a single word between positive, negative, and neutral across two conditions (sentence in third person or second person). Do people process the emotional stimulus differently when the sentence is about them vs. another person?


output they see during the experiment: waves from all the different electrodes (N? about 20) at a 1.2s time scale. They'll cut up the signal based on events later.

waveform is very noisy, so you need a lot of trials - 30 events for each condition. ERP (event-related potential) - a way to analyze EEG.

need to watch the output during the experiment to prevent bad data - e.g. artifacts (like blinks). Can tell participants to stop blinking if it keeps happening!

output is a distribution of activation ("smear"). The center of the distribution is not necessarily the actual location in the brain... e.g. the N400 response shows up on the opposite side of the skull from the brain area that generates it. Inverse problem - infinite # of brain areas where a signal could have originated; want to localize within a % of probability.

combining this data with MRI or MEG can add spatial information (but again, hard). The distribution at least tells you whether two events are happening in different areas or not.

-- data processing and analysis --

lots of script files for data collection and pre-processing

events are coded according to a coding scheme for study events - just an excel file

BDF: bins of interest. Select out time slices that represent events; categorize events based on the study conditions and other slices of interest. A script generates the bins (so each bin is a collection of small time-slices of waveform data around an experimental event).
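(A rough sketch of what this binning step might look like in code - not their actual BDF format or homebrew tool; the event codes, sampling rate, and epoch window below are made up for illustration.)

```python
import numpy as np

def make_bins(eeg, events, bin_defs, srate=200, window=(-0.1, 1.0)):
    """Cut time slices of continuous EEG around coded events and group them into bins.

    eeg      : (n_channels, n_samples) continuous data
    events   : list of (sample_index, event_code) pairs
    bin_defs : dict mapping bin name -> set of event codes belonging to that bin
    window   : epoch limits in seconds relative to the event
    """
    pre = int(abs(window[0]) * srate)
    post = int(window[1] * srate)
    bins = {name: [] for name in bin_defs}
    for sample, code in events:
        for name, codes in bin_defs.items():
            if code in codes and sample - pre >= 0 and sample + post <= eeg.shape[1]:
                bins[name].append(eeg[:, sample - pre:sample + post])
    return {name: np.array(epochs) for name, epochs in bins.items()}

# hypothetical usage: codes 11/12 = second-person positive/negative
eeg = np.random.randn(20, 60000)              # 20 electrodes, 5 min at 200 Hz
events = [(1000, 11), (3000, 12), (5000, 11)]
bin_defs = {"2nd_person_positive": {11}, "2nd_person_negative": {12}}
bins = make_bins(eeg, events, bin_defs)
print({k: v.shape for k, v in bins.items()})  # bin -> (n_events, n_channels, n_samples)
```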

scripts are just straight text, and they all seem to be processed in homebrew software made by a senior researcher in the lab. Lots of different processing apps; users just drag and drop script files onto the appropriate executable.

bin generation outputs a count of bins, events, and... (what's that word supposed to be? wives? elves?) - useful for error checking

RawView -> viewer for the raw waveform, also used for error checking. Sample events; semi-automatic artifact detection (e.g. blinks). The researcher checks the automatic output for accuracy and changes thresholds as needed - usually do need to change something. Could do it all by hand, but time and consistency are problems. They prefer false negatives to false positives; better to throw out good data than keep bad (similar to CBDM researchers). Artifacts cause really strong signals, so they can throw off the analysis by a lot. Checks errors by viewing the raw waveforms; Eric seems to have an intuitive sense of what good and bad wave patterns look like.
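(A minimal sketch of the kind of threshold-based artifact flagging described here - the 75 µV threshold and data shapes are assumptions, not the lab's actual settings.)

```python
import numpy as np

def flag_artifacts(epochs, threshold_uv=75.0):
    """Flag epochs where any channel's peak-to-peak amplitude exceeds a threshold.

    epochs : (n_events, n_channels, n_samples) array in microvolts
    Returns a boolean mask of epochs to reject; the researcher would then
    review the flags and adjust the threshold if too much good data is lost.
    """
    peak_to_peak = epochs.max(axis=2) - epochs.min(axis=2)   # (n_events, n_channels)
    return (peak_to_peak > threshold_uv).any(axis=1)

epochs = np.random.randn(30, 20, 220) * 10   # fake data: 30 events, 20 channels
epochs[3, 5] += 200                          # simulate a blink on one trial
reject = flag_artifacts(epochs)
print(f"rejecting {reject.sum()} of {len(reject)} epochs")
```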

this process takes 10-15 minutes per participant (varies)

another script generates averages across each bin. Check the data again, and if bins seem small, return to RawView and filter on the bin of interest. Want to keep the number of events in each bin over 30, or N gets too small for confidence - so if the number gets small you want to make sure no good data is getting thrown out, and adjust the threshold again if you can.
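(Sketch of the per-bin averaging plus the "keep N over 30" check; the function name and data are invented for illustration.)

```python
import numpy as np

def average_bins(bins, reject_masks, min_events=30):
    """Average the surviving epochs in each bin; warn when a bin gets thin."""
    averages = {}
    for name, epochs in bins.items():
        kept = epochs[~reject_masks[name]]
        if len(kept) < min_events:
            print(f"warning: bin '{name}' has only {len(kept)} events - "
                  "re-check artifact thresholds before trusting this average")
        averages[name] = kept.mean(axis=0)   # (n_channels, n_samples) average waveform
    return averages

# fake data: two bins with different numbers of surviving events
bins = {"positive": np.random.randn(35, 20, 220),
        "negative": np.random.randn(25, 20, 220)}
masks = {k: np.zeros(len(v), dtype=bool) for k, v in bins.items()}
avgs = average_bins(bins, masks)
```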

CalNormalize -> calibrate waveforms by a wave of known characteristics, then scale everything. Bandpass and lowpass filter to remove noise - Fourier, taking out everything over 15 Hz.
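(Sketch of the low-pass step as described - the 15 Hz cutoff is from the interview; a scipy Butterworth filter stands in for whatever their executable actually does.)

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(data, cutoff_hz=15.0, srate=200.0, order=4):
    """Zero-phase low-pass filter: remove everything above the cutoff frequency."""
    b, a = butter(order, cutoff_hz / (srate / 2.0), btype="low")
    return filtfilt(b, a, data, axis=-1)

noisy = np.random.randn(20, 220)   # fake averaged waveforms for 20 electrodes
clean = lowpass(noisy)
```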

(again, these are all separate executables and he's just dragging and dropping script files on them)

that's the end of pre-processing, now you actually look at the data...

ERPView

this view shows the waveforms from each electrode, arranged in a roughly spatially-accurate formation. The average waveform is taken across participants. You can look at one bin or split it into multiple bins (e.g. conditions) per electrode. Looking for components - common patterns in the waveform associated with a type of brain activity, e.g. N400 -> associated with semantic processing; P600 -> late positivity, associated with deeper encoding.
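(Sketch of the grand averaging across participants that this view appears to show, plus a crude look at an N400-ish window; the electrode index, time window, and array shapes are illustrative assumptions.)

```python
import numpy as np

# hypothetical per-participant bin averages: (n_participants, n_channels, n_samples)
condition_a = np.random.randn(12, 20, 220)
condition_b = np.random.randn(12, 20, 220)

grand_a = condition_a.mean(axis=0)   # grand-average waveform per electrode
grand_b = condition_b.mean(axis=0)

# crude look at an N400-like window (300-500 ms) on one centro-parietal electrode;
# electrode index and timing are illustrative, not the lab's montage
srate, epoch_start = 200, -0.1
n400 = slice(int((0.3 - epoch_start) * srate), int((0.5 - epoch_start) * srate))
print(grand_a[10, n400].mean(), grand_b[10, n400].mean())
```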

components have been discovered over the history of EEG research. Not always 100% sure this component is the known component you're comparing it to... need to use the distribution to check; certain components are associated with particular electrode locations.

also focus in and subtract two conditions to see the difference between them. Generate contour maps (rainbows kill people!!) over a head-shaped glyph to show the activation distribution - can be misleading to novices, since activation location doesn't really correspond to brain areas. All this information can help confirm the identity of a component.
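(Sketch of the condition-subtraction step; a topographic contour map is essentially the per-electrode values of this difference at one time point, interpolated over the head glyph. Shapes are made up.)

```python
import numpy as np

# grand averages for two conditions: (n_channels, n_samples)
grand_a = np.random.randn(20, 220)
grand_b = np.random.randn(20, 220)

difference = grand_a - grand_b       # difference wave for every electrode

# all electrodes at one time point = one frame of the head-shaped contour map
t = 100                              # arbitrary sample index
scalp_snapshot = difference[:, t]
print(scalp_snapshot.round(2))
```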


Eric has trouble explaining what the contour maps are for. Helps to connect spatial and temporal... but again, spatial is kind of misleading. Separate effects across conditions.

can look at individuals or averages; can switch channels

ERPManipulate -> more processing, missed something here

ERPMeasure -> converts all the data to numbers for stats analysis. Components based on time and distribution; scale by a baseline.

lots of raw text data; a script file within SPSS is used to process the text data

need to divide up the scalp spatially. Don't want to treat every electrode as a factor in the ANOVA, but location at some level should be a factor; need a coarser spatial organization. But there's no standard way to divide up the head - you just make up a scheme and argue for it in the paper. Regions analysis vs. column analysis.
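(One possible - entirely invented - region scheme, just to show the idea of collapsing electrodes into a few spatial factor levels before the ANOVA.)

```python
import numpy as np

# hypothetical electrode -> region grouping (names and assignment are invented)
regions = {
    "left_anterior":   ["F3", "F7", "FC5"],
    "right_anterior":  ["F4", "F8", "FC6"],
    "left_posterior":  ["P3", "P7", "O1"],
    "right_posterior": ["P4", "P8", "O2"],
}
channel_names = ["F3", "F7", "FC5", "F4", "F8", "FC6",
                 "P3", "P7", "O1", "P4", "P8", "O2"]
data = np.random.randn(len(channel_names), 220)   # fake averaged waveforms

# collapse electrodes within each region so "region" can be one ANOVA factor
region_waveforms = {
    name: data[[channel_names.index(ch) for ch in chans]].mean(axis=0)
    for name, chans in regions.items()
}
print({k: v.shape for k, v in region_waveforms.items()})
```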

a VBA script does the SPSS processing, so it's all invisible to the researcher. It extracts an organized Excel file that just lists significant effects, then followup tests for each effect; highlight shading shows significance level.

time windows need to be adapted per analysis

dependent variable is either average amplitude over a time range or peak amplitude (avg amplitude is better; a peak in an average is problematic). Also look at latency.
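(Sketch of the three measures mentioned - mean amplitude over a window, peak amplitude, and peak latency; the window, baseline, and sampling rate values are illustrative.)

```python
import numpy as np

def erp_measures(waveform, srate=200, epoch_start=-0.1,
                 window=(0.3, 0.5), baseline=(-0.1, 0.0)):
    """Mean amplitude, peak amplitude, and peak latency for one averaged waveform."""
    def idx(t):                      # convert a time in seconds to a sample index
        return int(round((t - epoch_start) * srate))
    base = waveform[idx(baseline[0]):idx(baseline[1])].mean()
    seg = waveform[idx(window[0]):idx(window[1])] - base   # baseline-corrected window
    mean_amp = seg.mean()            # the preferred dependent variable per the interview
    peak_i = np.abs(seg).argmax()
    peak_amp = seg[peak_i]           # peak of an average: treat with caution
    latency = window[0] + peak_i / srate                   # seconds after stimulus onset
    return mean_amp, peak_amp, latency

wave = np.random.randn(220)          # fake averaged waveform for one electrode
print(erp_measures(wave))
```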

do this whole process after each participant, then more analysis when all the data is in. Go through the data lots of different ways; figure out time windows, scalp divisions.

statistics is confirmatory; the bulk of the analysis is visual