*********************** Superlatives ****************************
Tower Slow uptime 2/24 - takedown on 5/10: 92.7%
Tower Fast uptime 2/24 - takedown on 5/10: 93.8%
Days Met City on generator power: 42
Litres of gas schlepped by Team ATMOS to Met City for the generator: ~1260; an additional ~200 in and ~400 out by helicopter
Total distance traveled by Met City relative to the ship based on a 12 hour running mean: 1.4 km
Minutes with air temp >= 0 C: 32 (Apr 19, 2020)
Minutes with surface skin temperatures of 0 C: 45 to 90, depending on location
Maximum 10 min mean wind speed at 10 m height: 15.7 m/s
Maximum air temp: +0.2 C (Apr 19, 2020 @ 10 m height)
Minimum air temp: -42.3 C (Mar 4, 2020 @ 2 m height)

*********************** Met City Installation Changes ****************************
20200307 Licor taken down for heater fitting
20200311 Fish Cam down: on the wrong side of the lead; cable disconnected at Met Hut and sent to ship side of lead
20200311 Picarro shut off orderly
20200311 Noodle shut off at MC 17:34Z because the lead split it from MC
20200312 (or 20200313) AWI EC sled + mast taken down. This was located about 2-3 o'clock from the tower for a side-by-side. GPS? Taneil said it was put up in mid-late February
20200312 Picarro removed from Met Hut (Met Hut cold and gennys can't provide the heat or power for pumps)
20200312 ARM disdrometer down. Rescued from lead.
20200317 Rocket Traps in their original position lost
20200317 ARM installations retreated. GPS?
20200321 Rocket Traps v2 installed about 10 o'clock to the tower at a far distance. GPS?
20200322 Sodar re-install begins on hummock between VISSS and Met Hut
20200402 Sodar running again
20200403 Rocket Traps v2 moved to near the lightpole position: 2 traps, highest about 1 m high with profile similar to power cable tripods. This position is within a meter of the old light post.
GPS point taken 20200410.
20200404 MC Lidar re-installed near OC/BT
20200410 Picarro returned to Met City and turned on
20200413 Removed the Creamean sampler's cable tripod that was set up in the Rocket Trap v2 position because it wasn't being used. Also removed the flag.
20200419 Picarro shut off at MC
20200426 VISSS taken down and returned to ship
20200510 0829Z Met City power shut off for take down

*********************** ASFS in the CO ****************************
20200414 ASFS50 installed in BGC1
20200415 ASFS30 installed at Balloon Town nearby AWI EC sled (taken down 20200502)
20200416 SIMBA installed at BGC1 near ASFS50
20200502 ASFS30 removed from BT, staged near BGC1 for EFOY testing
20200505 ASFS30 started data collection for EFOY testing in BGC road...not sciencey data!
20200507 1140Z ASFS30 repositioned into ASFS50 BGC1 location. FPB = ASFS50 FPA (not moved, just plugged it into the new station). FPA = FPA.
20200507 ASFS50 turned off and taken back to PS; staged in the log area for loading.

*********************** Noodle Stuff ****************************
20200311 Noodle shut off at MC 17:34Z because the lead split it from MC
20200410 Begin Noodle install at OC
20200414 BGC1: Noodle completed
20200414 12:27Z - 20200502 06:45Z Noodle was running at BGC1

*********************** Met City Major Instrument Issues ****************************
20200122 last report from the p spar
20200310 FMI fans taken down
20200313-20200327 VISSS running with green lights, but no data is being saved
20200325 Power cycled (unsuccessful) AOFB per Stanton's request because it was no longer reporting
20200329 VISSS Master error messages on boot up
20200319 0900 AOFB stops reporting.
20200410 Unsuccessful attempt to scout with ROV
20200312-20200321 ICERAD fans dead (noticed 20200318)
20200321-202003xx FMI SPN1 cable being repaired, no measurements
20200404 VISSS Master is taken to the ship
20200404-20200412 VISSS Slave is run as Master while Master is repaired and data rescued, sent to MCS
20200418 Tim reports the AOFB46 (MC) is calling in once again!
20200419 VISSS and Picarro shut down orderly during power interruption
20200420 Picarro removed because hut no longer heated (genny power); VISSS turned back on
20200423 Photographed a gradual slope on the north side of the ASFS30 (BT) radiation. This is the cause of the offset USW, which is messing with the albedo.
20200424 Confirmed that VISSS has a clock synching problem. It can't see the server with the radiolan. It can't see the tower because the subnets are different. It doesn't have a GPS. Clocks have drifted by a few seconds.
20200428 VISSS installation on P-deck slowly begins
20200428 Radiometers have glaze ice. Can't get it off.
20200430 VISSS starts data collection on P-deck
20200504 David installs stakes; light disturbance near swingset, and new posts (stakes) nearby too
20200505 0703Z NICs changed to DHCP and time synching works. Very nice case study day
20200505 Removed Stanton yellow box and AOFB46 auxiliary power cable, dug out solar cells, performed SD card retrieval and collected 200 MB. Archived AOFB46 data and spar/mapper data on Workstation
20200505 ARM installations' guy lines dug out to ice anchors in prep for moving...some disturbance

*********************** Met City Major Infrastructure Events ****************************
20200311 Data and power cables cut by lead.
20200314 Fuel drum installed as gas tank on 5.5 kW genny, power now continuous
20200323 CU RadioLAN (ASFS spare) installed on tower 2 m boom. Enough connection for RD and checks, but not enough to transfer the data
20200323 Oops, the drum has some diesel in it! No action taken.
20200324 Light cables shoveled out and removed
20200331 Ran power cables out to Met City from Droneville and across the Mast and Big Ridges
20200331 Generator died but was restarted - first power cycle since 20200314
20200401 Generator died overnight, another power cycle. Started Hondas
20200401 Line power connected! Another power cycle
20200407 UPS installed. Power down for about an hour
20200402 Data Team recon'd for a fiber line but declined to install it because of the ice conditions
20200402 Reflagged Met City to manage traffic
20200402 Nearly swapped back over to gennys with leads opening under the power line, but stayed the course
20200419 Leads opening forced emergency disconnect of MC/DV power lines. 5.5 kW yellow (drum) genny failed. Ran on Hondas overnight. Minimal data gap save Picarro and VISSS (shut down)
20200420 Lead situation unchanged. Returned to MC with gas and Tomas/Hannes, who set up 1 Honda with connection to drum. Filled drum and got back to ship as storm came in.
20200422 The Honda appears to burn 1 L per hour, which is better than the 1.5 L per hour from the yellow genny
20200425 Checked Honda spark plugs (look like new), air filter (clean), and oil (used but serviceable for now). Getting mixed advice about how often to actually change the oil.
20200427-20200428 The infamous aerosol gap. Data gap overnight. Generator intentionally powered down to play nice in the sandbox with the aerosol teams. Major kerfuffle.
20200504 David installs stakes and does lidar scan
20200505 AOFB surgery (SD card), ARM starting excavations, power hub excavated, prepped for 2nd fuel drum
20200506 Helicopter on site in the afternoon. Power hub removed, yellow genny removed. ARM take-down and fuel drum brought in.

*********************** Met City Major Data Issues ****************************
20200323/4 Lost about 12 hours of Tower CR1000X data.
Not sure what happened, but the hypothesis is that attempts to use the new RF with the tower->daq transfer scripts resulted in a file conflict when the transfer took too long to write an open file. Worse, though I tried, I failed to backfill from the logger's internal memory. There is something funny in that logger's data allocations.
20200419ish BLSN attenuates all sonics. 20200420 ...is an example.
Michael found duplicate CR1000X data in the tower files and clears it out in the python code. I looked into this and found that it occurs during power outages. I think what is happening is that the improper shutdown of the tower cincoze is an improper shutdown of loggernet, which is actively retrieving data. Loggernet loses its most recent (active?) file pointer and, when it boots up again, it redownloads the last few minutes before the power off.
20200422 Discovered a clock sync error on the Noodle CR1000. The GPS is correct and the logger receives the message and passes it along. I think the cincoze doesn't receive it though, because the cable isn't plugged in. However, it still has Andi's GPS attached, so the cincoze clock is OK. The logger clock was off: Server Date: 4/22/2020 2:55:24 PM, Station Date: 4/22/2020 2:51:16 PM. I don't know why the logger isn't updating its clock with the GPS - it is getting the data! I reset the clock at the time stated previously and also enabled synching to the cincoze at 1 s offsets. This seems like a loggernet bug. The error of minutes is constant based on the file write times and is probably tied to the logger's drift since it was last used in the autumn, but there could have been some additional drift since the Noodle was installed last week. Note that the V102 isn't doing anything important other than logging some GPS data. The heading is there, but even though the GPS has been in the same spot, it isn't actually mounted to the tower, so this is a bit unfinished. The cincoze gets its time with NOAAS using Andi's GPS.
The logger should be getting the clock from the V102, but it isn't working properly. The fact that there are two GPSs is an oversight: the first attempt at rebuilding the Noodle used Andi's GPS and the second used the V102, but I forgot Andi's was still there and also forgot to connect the V102 to the cincoze; two cancelling errors.
20200422 There is an issue with the logger clock at Met City too. This entire leg it has been 2 seconds ahead of the tower. No idea why. I tried setting it once to the cpu, but it just switched right back. Noted.
20200308 & 20200402 (cr1000x_tower_03092020_0000.dat, cr1000x_tower_04032020_0000.dat) hand edited. wtf?
# - added verify_integrity=True in the pd.concat in get_logger_data. If the logger stamps a duplicate time FOR A UNIQUE ROW, a duplicate will sneak into the df and python can't find it. You must hand edit this by interpolating in the correct times, because I can't figure out how to do it automatically; verify_integrity will still fail, but at least it tells you where to look. I find 3 seconds in the whole tower data set.
20200520 Note that there is no data in the MCS under flux_tower_30m_leeds. Instead it is under flux_tower_12m_ucb in the Metek30m directory. This is related to the original data acquisition setup and was the precedent from Leg 1. The data acquisition completely decoupled in physical space and partially decoupled in data space for the BGC1 install. Thus, I added data for Leg 3 from the sonic to Metek30m, but the slow data is no longer in the CR1000X daily files from the tower (as it originally was); it is instead in its own unique daily files in the CR1000_mast directory under flux_tower_12m_ucb. Confusing? Yea. Reality diverging from MCS-world? Also yea.
20200530 LWD iridium has a couple of problems today. First, I found that I did not record the IR20 case temps in the iridium files. This was a major oversight, so I uploaded a new program (yikes!!).
Second, there are large spikes in the LWD data today to 340-350 Wm2 in very discrete jumps. The case temps are +7-8 C!!! This is really warm and concerns me, though the LWU skin temps are only 0.5 C warmer than IRT, so the bias is small. I think the internal heater is messing with things, but even if it warms the case it should not put a greater than 1.5 Wm2 gradient through the case. I think these are measuring a little warm. There is incidentally some comparison data from the couple of days of overlap when ASFS30 sat near ASFS50 in the road. The LWD on ASFS30 then would have been with the heater but without the VEN, while ASFS50 nearby was with the heater and the VEN. The overlap data with the VEN on and off is 5/6. Caveat: the ASFS30 was tilted 4 deg. Preliminarily, from a quick comparison between the overlap when 30's VEN was off and 50's was on and a time when both were on, the bias = +1.5 Wm2, which is the same as the heater, and this explains the bias in the melting surface temp too. I am interpreting the warm LWD today as spurious and associated with snow (it is snowing at PS!) landing on the dome, melting and reaching 7-8 C, increasing LWD accordingly, and then evaporating because it is 7-8 C.

*********************** Met City Major Events ****************************
20200402 Nice case study with a nearby lead opening upwind of Met City in the sampling sector.
20200415 Near melt!
20200419 Melt, major lead openings, open water, no freezing, moderate southerlies, calm overnight as low passes
20200420 Cooler (~ -8 C), some freezing of leads, shift to NW wind and high wind speeds, snow: licors and some sonics having attenuation problems, which I think are from the snow
20200421 High winds and fresh snow = much BLSN. All the snow in the air is f%$#ing with the sonics, ASFS30 too, and the ASFS50 sonic stopped reporting entirely because of the loss of another path. Maybe it recovers after the storm? Unsure if this is caused by damage or attenuation.
On the ASFS the 1 min data is heavily affected by averaging of NaNs, but the fast data is more complete.
20200422 Interesting case of albedo today because it was so clear. The snow is not fresh; lots of scouring. The surface has shiny areas of refreeze from before the storm, exposed by the winds during the storm. This includes under both ASFS. ASFS50 has a normal albedo signal, but ASFS30 USW is shifted to peak later in the day. I think an uneven surface is causing this. The slope near 30 is too much, perhaps. All ASFS radiometers confirmed clean and level this afternoon.
20200427-20200428 Opening again. Travel time up to 2 hours.
20200428 Freezing drizzle today. Temps -6 C, but rain-ish. Yeah...f*d with the radiometers
20200430-20200501 (I think) proper snowfall
20200513-14 Major storm, MOSAiC record wind Bft 10; massive leads and ice breakup, camp destroyed
20200514-15 Melt onset forecasted as warm air under clear skies is drawn north into the area in the lee of the big storm. So SW forcing after a LW event is the hypothesis, but no melt occurred, or only very briefly
20200516 10Z First true fog after many, many forecasts

********************** Handover Reminders **********************************
- VM needs to be renewed every 2 months with the Data Team. Probably due around now, actually...
- Spare cincoze is at the Noodle in a serviceable logger box. 192.168.202.59
- Original cincoze has a new IP, 192.168.3.222, to make it play nice with the radiolan
- How do I archive SquirrelMail?
- Container problems:
  - Leaks form in two spots in the ceiling (marked). Bow in the floor near the door because of water under the floor (comes up at seam near UPS).
  - HTR-1 fan works but heat is dead. Dave/Chris/Tom could not fix it (broken relay?), but Tom says there is a way to double output from HTR-2. However, not needed because it is warm now. I just have HTR-1 off, including fan, because it is so loud anyhow.
- Lars, Steffi, Gina, Annette have the preliminary nc data set.
I think we need to start a listserv of users and send emails when updates are made or something official becomes available.

**************** Opening Closing *************************
20200311 1040 - 20200312 0100: leads, open water
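A minimal sketch of the duplicate-timestamp handling described under Met City Major Data Issues. This is my own illustration, not the real get_logger_data code: the `t_2m` column and the toy frames are made up. It shows the two cases from the notes: verbatim re-downloads after a power cycle (same stamp, same values - safe to drop), and a duplicate stamp on a unique row, which verify_integrity makes fail loudly so you know where to hand-edit.

```python
import pandas as pd

# Toy stand-in for the tower CR1000X frames; the last two rows mimic
# loggernet re-downloading the minutes before a power-off.
idx = pd.to_datetime([
    "2020-04-20 00:00:00", "2020-04-20 00:00:01", "2020-04-20 00:00:02",
    "2020-04-20 00:00:01", "2020-04-20 00:00:02",   # re-downloaded rows
])
df = pd.DataFrame({"t_2m": [1.0, 1.1, 1.2, 1.1, 1.2]}, index=idx).sort_index()

# Rows that repeat both the stamp AND the values are verbatim
# re-downloads and safe to drop, keeping the first copy.
mask = df.index.duplicated(keep="first") & df.duplicated(keep="first")
clean = df[~mask]
assert len(clean) == 3

# A duplicated stamp on a row with *different* values can't be fixed
# automatically; verify_integrity raises on the concat and the error
# names the offending timestamps to hand-edit (interpolate the times).
rogue = pd.DataFrame({"t_2m": [9.9]},
                     index=pd.to_datetime(["2020-04-20 00:00:02"]))
try:
    pd.concat([clean, rogue], verify_integrity=True)
    raised = False
except ValueError:
    raised = True
assert raised
```

The point of verify_integrity is only detection: it fails the concat rather than silently letting a second row with the same stamp sneak into the df, which matches the 3 seconds of hand edits noted in the tower data set.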
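A back-of-envelope check on the 20200530 LWD spikes (my addition, not part of any processing code): a blackbody at the reported 7-8 C case/dome temperature radiates right around the observed 340-350 Wm2, which is consistent with the melted-snow-on-the-dome interpretation. Emissivity below 1 and partial dome coverage would pull the real number down a bit.

```python
# Stefan-Boltzmann sanity check: emission from a surface at 7-8 C.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m-2 K-4

def blackbody_flux(t_celsius):
    """Blackbody emission in W m-2 at the given temperature in C."""
    t_kelvin = t_celsius + 273.15
    return SIGMA * t_kelvin ** 4

flux_7c = blackbody_flux(7.0)   # about 349 Wm2
flux_8c = blackbody_flux(8.0)   # about 354 Wm2
```

So liquid water at 7-8 C sitting on the dome is enough, by itself, to explain discrete jumps into the 340-350 Wm2 range.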