Monday, June 20, 2016

SAS Programming package for Sublime Text is now available via Package Control

Just a quick note to convey the information in the title there.  Thanks to some lovely help from the community (especially @friedegg, @seemack, and @bobfourie), the SAS package for Sublime is now more functional than ever, and is finally installable via the wonderful Package Control.

So if the previous, janky installation procedure has thus far kept you from trying out Sublime for SAS programming, do please give it a whirl.

Friday, February 26, 2016

Sublime Text, Intel Graphics and CTRL-ALT-ARROW hotkeys

Just in case this saves someone else (or future me) some head-scratching.

Work issued me a new laptop, onto which I promptly installed ST3.  When I first hit ctrl-alt-down arrow to go into column select mode, my display turned upside down.  I've seen this dozens of times by now--this is the graphics card add-on program making it easy to rotate my display (though why a 180-degree rotation is useful enough to deserve a hotkey is unclear).  I hit ctrl-alt-up arrow to put things back to rights, and then went into the system tray to look for the (in my case Intel) icon so I could turn off its hot-keys.

But now I go back into ST3 and those key combos don't work at all.  The display stays properly oriented, but my Sublime cursor goes nowhere.

Is ST3 even receiving those keypresses?  I open its console with ctrl-` and type "sublime.log_input(True)" to turn on key event logging.  Nada, zip, nothing--something is completely eating those keypresses.  I google for a bit (mostly finding posts from people who have inadvertently rotated their displays and don't know how to un-rotate them) and find nothing useful.

So I pull up the full Intel HD Graphics Control Panel app, go into the hotkeys section, and--for a goof--re-define the rotate hotkey combinations from ctrl-alt-::something:: to shift-alt-::something::.

I enabled hot-keys & tried Sublime again.  That worked.  Then I disabled hot-keys and it still worked.  So I'm calling it a fix.  This seems like a bug in Intel's utility, though--it was eating keystrokes that it had been told to ignore.

Tuesday, November 10, 2015

How we do Quality Assurance for the VDW

Last weekend I attended the most excellent PCORI Data Quality Code-a-thon, hosted by Michael Kahn and his colleagues over at the University of Colorado, at which I met some really smart people doing really interesting work.  A couple of them evinced an interest in VDW QA work, and I said I'd share the substantive checks that we are doing.

Some Context

This is volunteer work

Like most everything VDW, QA work is largely unfunded and distributed across implementing sites.  Volunteers from the data area workgroups (e.g., Utilization, Pharmacy, Enrollment) put together lists of checks pertaining mostly to their data areas, write VDW programs that implement the checks & periodically (generally annually, but sometimes more frequently) make a formal request that implementing sites run the code & submit their results to the program author(s) for collation & reporting out to the VDW Implementation Group.

One big implication of this is that our approach is not nearly as coordinated as an outside user might expect.  I'd like to say that we are evolving toward a common approach, and we do have a new(ish) "Cross-File QA" group that's taking on meta-standards for QA work, but there is definitely a long way to go before this is uniform enough to be coherent to anyone not familiar with the history.

QA Is Multi-purpose

We generally try to kill 2 birds with our QA stones.  Primarily we want to characterize the quality of our implementations for ourselves, each other, and our user community.  But we also love it when our reports are useful to Investigators writing grant applications, who sometimes need to brag about, e.g., how many person-years of data we have across the network for people with a drug benefit.

This can be a slippery slope, on occasion leading individual sites to declare that a given measure has strayed from QA (which is generally exempt from IRB approval) into substantive research territory, or else that it exposes what should be proprietary information.  One example that comes to mind on Enrollment was a measure of churn--e.g., in a typical month, how many enrollees does a site tend to lose to disenrollment, and how many does it pick up?  It's a constant dance/negotiation.

Roy's QA Prejudices

To my way of thinking the best QA:
  1. Enables implementers to find (and fix) their own errors first, before exposing them to any larger audience.  This is a matter of professional courtesy.
  2. Includes as many objective checks as are practical to implement in code, and presents the running user with:
    1. A clear list of what the checks are
    2. What the tolerance is for those checks (e.g., up to 3% of your patient language records can have nonstandard values before we warn you, but any more than 5% and we're going to say you failed the check; there's a sketch of this grading below).
    3. Whether the file passed or failed each check.
  3. Includes more general descriptives characterizing the amount and predominant values in the data.  These are often most useful when viewed as part of collated output so you can compare sites.
  4. Produces collated quality/descriptive reports that
    1. are readily available to the user community (we have them up on the HCSRN web portal, behind the password-protected area), and
    2. are easily updated (completely automatically if possible) so that implementers are incentivized to fix whatever issues they can as soon as possible (and get credit for doing so).
Following the lead of the Utilization (encounters) workgroup, we generally refer to the objective checks as "Tier 1" checks and the descriptives as "Tier 2".  Like most things, the checks are a matter of negotiation within the workgroup.  I've come to think of them as crucial adjuncts to the specs themselves because they sometimes reveal reasonable disagreements on how to interpret the specs.
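
To make the tolerance idea concrete, here's a minimal sketch of the kind of pass/warn/fail grading I have in mind.  The macro name, parameters, and thresholds are all just illustrative--this isn't lifted from any actual VDW QA program:

%macro grade_check(pct_bad =, warn_at = 2, fail_at = 5, result = check_grade) ;
  %* Classify a single check result based on the percent of off-spec records. ;
  %global &result ;
  %if %sysevalf(&pct_bad > &fail_at) %then %let &result = FAIL ;
  %else %if %sysevalf(&pct_bad > &warn_at) %then %let &result = WARN ;
  %else %let &result = PASS ;
%mend grade_check ;

%* If 3.2% of (say) language records are off-spec, we warn. ;
%grade_check(pct_bad = 3.2)
%put NOTE: check grade is &check_grade ;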

The Checks


Demographics

Tier 1

  1. All variables listed in the spec exist and are the proper type (character or numeric).  I don't personally like to check length and so don't, though there is diversity of opinion on that & so some QA programs do.  There is no tolerance on these checks--any missing variable is a fail.
  2. For those variables that have an enumerated domain of permissible values (pretty much everything but MRN and birth_date), we check that those are the only values found.  If > 2% of the values found are off-spec we issue a warning.  At 5% or greater we fail the check.
  3. MRN is unique.  Zero tolerance here.
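
By way of illustration, that last zero-tolerance check is about the simplest one to code.  A sketch--the two-level dataset name (vdw.demog) is a placeholder for whatever your site's demographics table is actually called:

* Count total rows and distinct MRNs--any difference is a fail. ;
proc sql noprint ;
  select count(*), count(distinct mrn)
  into :n_rows trimmed, :n_mrns trimmed
  from vdw.demog
  ;
quit ;

%macro report_mrn_check ;
  %if &n_rows ne &n_mrns %then %put ERROR: MRN is not unique--FAIL. ;
  %else %put NOTE: MRN is unique--PASS. ;
%mend report_mrn_check ;
%report_mrn_check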

Tier 2

  1. Counts of records (not just current enrollees) by
    1. Gender
    2. Race
    3. Ethnicity
    4. Age group
    5. Need for an interpreter
  2. Counts of enrollees over time by those same variables.
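
For the over-time counts, the trick is counting people enrolled as of a given date.  Here's a sketch for a single date, by gender--in real life you'd macro-loop this over a series of dates.  The dataset names (vdw.enroll, vdw.demog) are placeholders:

* People with an enrollment period spanning 01jan2010, counted by gender. ;
proc sql ;
  create table n_enrolled as
  select d.gender
       , count(distinct e.mrn) as n_enrollees
  from vdw.enroll as e inner join vdw.demog as d
    on e.mrn = d.mrn
  where e.enr_start le '01jan2010'd and e.enr_end ge '01jan2010'd
  group by d.gender
  ;
quit ;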


Enrollment

Tier 1

  1. All variables listed in the spec exist and are the proper type (character or numeric). Here again zero tolerance.
  2. For those variables that have an enumerated domain of permissible values (all but MRN, the start/stop dates, PCP and PCC), we check that those are the only values found.  Same tolerance here as with demog: > 2% is a warn, > 5% a fail.
  3. Enr_end must fall after enr_start.  Zero tolerance (this check and check 7 are sketched in code after this list).
  4. Enr_end must not be in the future.  Warn at 1%, fail at 3%.
  5. At least one of the plan type flags (which say whether the person/period was enrolled in a PPO, HMO, etc.) must be set.  Warn at 2% and fail at 4%.
  6. Ditto for the insurance type flags (e.g., commercial, medicare/caid, etc.).
  7. If any of the Medicare part insurance flags are set, the general Medicare insurance flag must be set. Zero tolerance.
  8. No period prior to 2006 should have the Medicare Part D flag set. 1% warn, 2% fail.
  9. If the Part D flag is set, the drugcov flag must also be set.
  10. If the high-deductible health plan flag is set, either the commercial or the private-pay flag must also be set.
  11. If any of the incomplete_* variables has the value 'X' (not implemented), then all of them must be 'X'.
(That last check refers to six variables too new to be listed in the publicly available specs.  They let implementers surface known problems with data capture, if there are any--for example, at Group Health we only have tumor registry information on people who live in one of the seventeen WA State SEER counties.)
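
For flavor, here's roughly how a couple of these might be coded--checks 3 and 7, say.  The variable names are my best recollection of the enrollment spec, and the dataset name (vdw.enroll) is a placeholder:

data enr_check ;
  set vdw.enroll ;
  * Check 3: enr_end must fall after enr_start--zero tolerance. ;
  bad_dates = (enr_end le enr_start) ;
  * Check 7: any Medicare part flag set without the general flag. ;
  bad_medicare = ((ins_medicare_a = 'Y' or ins_medicare_b = 'Y' or
                   ins_medicare_d = 'Y') and ins_medicare ne 'Y') ;
run ;

* The percent flagged feeds the pass/warn/fail grading described above. ;
proc freq data = enr_check ;
  tables bad_dates bad_medicare / missing ;
run ;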

Tier 2

  1. Counts and percents of enrollees over time by all substantive enrollment variables--plan types, insurance types, drug coverage, etc.
  2. Counts & percents over time by several demographic variables (listed above under demographics).
  3. Counts & percents of enrollees over time by whether the values in primary care clinic (PCC) and primary care physician (PCP) appear to be valid (that is, contain at least one non-space and non-zero character).
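
That last validity test is easy to express in SAS: strip out spaces and zeroes and see whether anything survives.  A sketch, again with a placeholder dataset name:

data pcp_check ;
  set vdw.enroll (keep = pcp pcc) ;
  * Valid-looking means at least one character that is neither blank nor zero. ;
  pcp_valid = (compress(pcp, ' 0') ne '') ;
  pcc_valid = (compress(pcc, ' 0') ne '') ;
run ;

proc freq data = pcp_check ;
  tables pcp_valid pcc_valid / missing ;
run ;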

More Later

As I'm the primary author of the E/D (enrollment/demographics) checks, these were the ones closest to hand.  I'll be back with at least some of the checks the other workgroups have implemented.

Wednesday, May 21, 2014

Axis Problem

How can I get the 0 value on this graph's x-axis to stay left-justified, without using a VALUES = statement on the XAXIS?

The code for this is:

data gnu ;
  input
    @1   clinic        $char5.
    @7   report_date   date9.
    @19  readmit_rate  3.1
  ;
  format report_date monyy7. ;
datalines ;
north 31may2012   0.8
north 30jun2012   0.2
north 31jul2012   0.3
west  31may2012   0.0
west  30jun2012   0.0
west  31jul2012   0.0
run ;

options orientation = landscape ;

ods html path = "c:\temp" (URL=NONE)
         body   = "deleteme.html"
         (title = "Axis Problems")

    proc sgplot data = gnu ;
      hbar report_date / response = readmit_rate ;
      by clinic ;
      xaxis grid /* values = (0 to .1 by .1) */ ;
      yaxis grid ;
    run ;

ods _all_ close ;

Friday, January 17, 2014

Why aren't these colors the same?

When I run:
proc format ;
  value $sub
    "bibbity" = "5"
    "bobbity" = "30"
    "boo" = "80"
    "baz" = "120"
    "foo" = "150"
    "zoob" = "180"
quit ;

data test ;
  do subject = 'bibbity', 'bobbity', 'boo', 'baz', 'foo' ;
    do obs_date = '01-jan-2010'd to '31-dec-2013'd by 30 ;
      num_widgets = input(put(subject, $sub.), best.) + floor(uniform(4) * 30) ;
      proportion_blue = uniform(4) ;
      output ;
    end ;
  end ;
  format obs_date mmddyy10. proportion_blue percent9.2 ;
run ;

%let out_folder = c:/temp/ ;

ods graphics / height = 6in width = 10in ;
ods html path = "&out_folder" (URL=NONE)
         body   = "deleteme.html"
         (title = "Why are bubble color and line color not coordinated?")

  proc sgplot data = test ;
    loess  x = obs_date y = num_widgets / group = subject ;
    bubble x = obs_date y = num_widgets size = proportion_blue / group = subject transparency=0.5 ;
    xaxis grid ;
    yaxis grid ;
  run ;

ods _all_ close ;

I get a graph in which the bubble colors and the loess line colors don't match--which I think is confusing.

If I change from loess to a series plot, the colors match.

Anybody know how I can get the colors to match?


Sunday, November 10, 2013

Loving R's ggplot2!

So I'm messing around with R for a Coursera course I'm taking, and totally loving the ggplot2 library.  Check out this lovely plot of some loan data we got from the course instructor:

That's a scatterplot with loess smoother + confidence intervals, all done with this simple call:

qplot(x     = FICO
    , y     = ir
    , data  = loansData
    , color = Loan.Length
    , geom  = c('point', 'smooth')
    , ylab  = "Interest Rate"
    , xlab  = "FICO Score"

How awesome is that?

Saturday, September 21, 2013

Launchy + PowerShell = easy navigation between project folders

Like most programmers at GHRI, I do work in a multitude of different directories.  Different projects store their programs & data in different folders, and there are numerous folders that are important to my data infrastructure work.

When I'm called upon to navigate to these different folders I typically have to remember where they are & then 'cd' over to them (if I'm at a command line, perhaps attempting tab completion) or type into Explorer's address bar one component at a time, waiting for the auto-complete.  This can be cumbersome--especially when I'm not physically connected to the network.

At some point I decided to set some environment variables for myself so I could just type, e.g., %myproj% into the Run window or Explorer's address bar or an Open File dialog & be taken there instantly.  I found this very helpful--no more having to remember where things lived, just my nicknames for them.

Then, after adopting PowerShell as my preferred command line & discovering functions, I created a parallel set of functions that just did a 'cd' into the proper directory.

Then my machine was repaved & upgraded to Windows 7, and I lost my environment variables.  Around the same time I read a Lifehacker article on the Launchy utility & decided to try that.  So rather than set up the environment vars again, I just created a special folder called Shortcuts into which I put shortcut files pointing to the various folders, named after my nicknames for the projects.  I like Launchy quite a bit, but I did miss my environment variables for the odd Open File dialog.

So today I decided to delve into powershell scripting a little so that I could put all the information in a script, and have it generate:
  • The environment variables I missed,
  • the ps functions I wanted, and
  • the Launchy shortcuts I wanted.
Here's what I came up with--it seems to work pretty well.

$WinShell = New-Object -comObject WScript.Shell
$shrt_dir = "C:\Users\Roy\Desktop\shortcuts"
# nicknames and locations of my projects
$projects = @{"grif"  = "\\some_server\griffin\stupid name" ;
              "cupid" = "\\other_server\projects\cupid" ;
              "prod"  = "\\data_server\management\programs\"

foreach($prj in $projects.GetEnumerator()) {
  # Create a shortcut named for the nickname that points to the dir in the value.
  $shrtfile = $shrt_dir + '\' + $prj.key + '.lnk'
  $shrt = $WinShell.CreateShortcut($shrtfile)
  $shrt.TargetPath        = $prj.value
  $shrt.WorkingDirectory  = $prj.value
  # Save() actually writes the .lnk file out to disk.
  $shrt.Save()

  # Create an environment variable for each.
  [Environment]::SetEnvironmentVariable($prj.key, $prj.value, "User")

  # Create a function for each nickname
  $this_func = "function " + $prj.key + "() {Set-Location '" + $prj.value + "'}"
  Invoke-Expression $this_func
}

Next up I want to change my prompt function so that those project folders show up as e.g., ::cupid:: in the prompt rather than the whole long thing.  I'm already replacing $env:home with a '~', so I should just be able to loop through that $projects dictionary to make similar substitutions.