Friday, March 16, 2018

Elements of a Distributed Research Data Network


As preparation for a panel discussion at Health Datapalooza 2018 on distributed data models, I wanted to jot down some thoughts on what I think you need to have a successful distributed research network. (Possibly these things are beside the point of that panel, but I've been wanting to write this for a little while. So here it is.)

Research Networks

What I mean by a 'research network' is a group of data-having organizations who intend to offer data access to remote users via some interface, in order to answer research questions. For me the paradigmatic example is of course the HCSRN. That's the organization I'm most familiar with, having been in on its VDW effort from pretty much the beginning in 2002.

So what do you need for a distributed research data network?

A Data Model

A data model is generally a collection of specifications for the tables that are available in the interface you're offering to remote users. It communicates to your users:

  • What data they will find in your instance (e.g., lab results, insurance periods, pharmacy fills)
  • How to refer to the data—e.g., table and field names.
  • How the data is structured—what relationships the tables bear to one another.

At least as important, the data model should communicate to your implementers just what data has a place in the data model, what it needs to look like, and where in the scheme it should go.

To that end, good specifications are more than just those table and field names. They should also feature some verbiage that sets out what data does and does not belong in each table (e.g., the Race fields should hold the patient's self-reported race values only—do not use non-self-report sources for these fields). Optimally, they will also specify some natural key for the table—some combination of substantive fields that should uniquely identify a record (e.g., there should be one record per (MRN, RXDATE, NDC, and RXPROVIDER)). At some point this meta-description verges into full-on implementation guidelines, discussed below, but there should certainly be some general description to help users and implementers alike know what should be in the tables.
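
To make the natural-key idea concrete, here is a minimal sketch of a uniqueness check over the pharmacy example above. The field names come from the natural-key example in the spec discussion; the records and values are invented for illustration.

```python
from collections import Counter

# Toy pharmacy-fill records. Field names follow the natural-key example
# (MRN, RXDATE, NDC, RXPROVIDER); the values are invented.
fills = [
    {"MRN": "p1", "RXDATE": "2018-01-05", "NDC": "00001111", "RXPROVIDER": "dr_a"},
    {"MRN": "p1", "RXDATE": "2018-01-05", "NDC": "00001111", "RXPROVIDER": "dr_a"},  # duplicate
    {"MRN": "p2", "RXDATE": "2018-01-06", "NDC": "00002222", "RXPROVIDER": "dr_b"},
]

def duplicate_keys(records, key_fields):
    """Return the natural-key tuples that appear on more than one record."""
    counts = Counter(tuple(r[f] for f in key_fields) for r in records)
    return [key for key, n in counts.items() if n > 1]

print(duplicate_keys(fills, ("MRN", "RXDATE", "NDC", "RXPROVIDER")))
```

Any tuples that come back indicate records that violate the spec's one-record-per-key promise.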

Important Adjuncts

Table specifications are the bare minimum you need for a successful DRDN. In addition to these, it is optimal to have:

  1. Objective QA tests
  2. Implementation guidelines
  3. An authority to construe the specifications and answer user and implementer questions.

That last element is potentially very expensive of course—it may involve paying knowledgeable humans to respond to questions and make recommendations. If your network is very successful and gains many implementers, it's possible that an active community will spring up around it sufficient to keep it going without money changing hands (in much the same way that happens with some open-source software). But it would be unwise to count on that happening early—much better to seed the ground by starting out with a paid group.

As the VDW effort matured I came to really appreciate the existence of objective QA tests as a supplemental source of data specification. Just the exercise of agreeing on what checks you should have (apart from 'all fields should exist & be the proper type') is incredibly valuable for surfacing the frequently diverse understandings and assumptions of your implementers.

Extra Credit

In addition to the above, it would be lovely to also include a sample implementation from widely available reference data (e.g., Optum, Medicare Claims). The OMOP folks have put together an implementation of an ETL process that creates OMOP data from CMS' synthetic public use file, for example.

Likewise, since the healthcare and health research fields are both ever-changing, it's good to have a process by which the data model can be amended.

Finally, in addition to a human-readable specification of your data model, it's always nice to include one or more flavors of SQL DDL scripts that create an empty set of the tables that constitute your data model.
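
As a sketch of what one such script might contain, here is a hypothetical table from a data model expressed as SQLite DDL and executed to prove it parses. A real network would ship equivalent scripts per SQL dialect (Oracle, SQL Server, etc.); these field names are invented for the example.

```python
import sqlite3

# Hypothetical DDL for a single data-model table. The natural key from
# the spec example doubles as the primary key.
DDL = """
CREATE TABLE pharmacy (
    mrn        TEXT NOT NULL,
    rxdate     TEXT NOT NULL,            -- ISO-8601 fill date
    ndc        TEXT NOT NULL,
    rxprovider TEXT NOT NULL,
    rxsup      INTEGER,                  -- days supplied
    PRIMARY KEY (mrn, rxdate, ndc, rxprovider)
)
"""

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
# Confirm the empty table exists with the expected columns.
cols = [row[1] for row in conn.execute("PRAGMA table_info(pharmacy)")]
print(cols)
```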

A Query Processing Engine

This is what your users will use to manipulate the data from afar. Because they are doing so "from afar" it is crucial to be as specific as possible as to what that environment is like. Examples include:

  • python 3.6 or later, with pyodbc 4+ and pandas v02+
  • R 3.4 or later
  • Oracle PL/SQL version 9i or later
  • SAS 9.3 or later
  • ::centrally created, possibly open-sourced, web app query tool with corresponding execution nodes at implementing sites::

For actual programming environments, where queries consist of human-generated, small batch, free range, artisanal code, you want to specify that environment in a lot of detail. Generally, you want to enable your users to create a similar environment locally (either with their local implementation of your data model, or from synthetic data) so they can formulate/test their queries before inflicting them on the network.
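
A toy pre-flight check a user might run before sending queries out: does the local environment match the spec? The (3, 6) floor mirrors the python example above; the package names passed in are whatever your network specifies (checked below against stdlib modules only so the sketch runs anywhere).

```python
import importlib.util
import sys

def environment_ok(min_python=(3, 6), required_packages=()):
    """Return True if the running interpreter meets a version floor and
    every named package is importable."""
    if sys.version_info[:2] < min_python:
        return False
    return all(importlib.util.find_spec(p) is not None for p in required_packages)

# Stand-in check using stdlib modules; a real spec would name pyodbc, pandas, etc.
print(environment_ok(required_packages=("json", "csv")))
```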

Opting for a custom web-based GUI application for specifying queries takes out a lot of the guesswork in terms of "will it run at the sites?" but of course that's at the cost of having to create said GUI application, and the sophistication of what users will be able to do.

A Registry of Implementations

It is crucial to have some central clearinghouse where implementers can register that they have implemented your data model and are open for business. The popularity of a data model comes from how many organizations have implemented it. The registry is also a great place for implementers to set expectations in terms of:

  • The extents-in-time of their data holdings.
  • The date they last ran the QA tests, and their results.
  • What QA issues are currently known, and whether they are fixable/in process of being fixed.
  • What specific versions of the query engine software they have available.
  • What hoops there are to jump through in order to get a response—e.g., data use agreements, fees, approvals, applications, etc.

On the HCSRN we have a portal website that we use to document these things.
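
The bullet list above can be sketched as a single registry record. Every field name and value here is invented for illustration, not taken from any real HCSRN schema.

```python
from dataclasses import dataclass, field

@dataclass
class SiteRegistration:
    """One hypothetical registry entry, mirroring the bullets above."""
    site: str
    data_start: str                  # earliest date covered
    data_end: str                    # latest date covered
    last_qa_run: str
    qa_result: str                   # e.g. "pass", "warn", "fail"
    known_issues: list = field(default_factory=list)
    engine_versions: list = field(default_factory=list)
    access_requirements: list = field(default_factory=list)  # DUAs, fees, approvals

entry = SiteRegistration(
    site="Example Health Plan",
    data_start="2002-01-01",
    data_end="2017-12-31",
    last_qa_run="2018-02-01",
    qa_result="warn",
    known_issues=["lab results incomplete before 2006"],
    engine_versions=["SAS 9.4"],
    access_requirements=["data use agreement", "project approval"],
)
print(entry.site, entry.qa_result)
```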

An Implementing Community

If nobody implements your data model, you don't have a network. And as soon as more than one org implements, you have the problem of ensuring that different implementers are interpreting those specifications in something like the same way. In a lot of ways, specifications are like legislated laws—they usually sound logical, coherent, and reasonable when they are formulated, in the abstract, with particular use-cases in mind. But just like we need judges to construe those laws in order to apply them to specific situations (often disturbingly different from those anticipated when the law/specification was first written) we need at a minimum, a way to arrive at consensus regarding how specific decisions should be made.

Arriving at consensus means having a conversation, be it live at meetings, or on conference calls, or asynchronously in e-mail or on forum threads. The means for having those conversations are what I mean by Community.


A User Community

No need to belabor this one. Without people interested in using your data model, again, you don't have a network.

What Else?

What have I forgotten? Tell me in the comments.

Monday, June 20, 2016

SAS Programming package for Sublime Text is now available via Package Control

Just a quick note to convey the information in the title there.  Thanks to some lovely help from the community (especially @friedegg, @seemack and @bobfourie) the SAS package for Sublime is now more functional than ever, and is finally installable via the wonderful Package Control.

So if the previous, janky installation procedure has thus far kept you from trying out Sublime for SAS programming, do please give it a whirl.

Friday, February 26, 2016

Sublime Text, Intel Graphics and CTRL-ALT-ARROW hotkeys

Just in case this saves someone else (or future me) some head-scratching.

Work issued me a new laptop, onto which I promptly installed ST3.  When I first hit ctrl-alt-down arrow to go into column select mode my display turned upside down.  I've seen this dozens of times by now--this is the graphics card add-on program making it easy to rotate my display (though why 180 degrees is a useful rotation that they want to serve is unclear).  I hit ctrl-alt-up arrow to put things back to rights, and then go into the system area to look for the (in my case Intel) icon to turn off its hot-keys.

But now I go back into ST3 and those key combos don't work at all.  Display stays properly oriented, but my sublime cursor moves noplace.

Is ST3 even receiving those keypresses?  I open its console with ctrl-` and type "sublime.log_input(True)" to turn on key event logging.  Nada, zip, nothing--something is completely eating those keypresses.  I google for a bit (mostly finding posts from people who have inadvertently rotated their displays and don't know how to un-rotate them) and don't find anything useful.

So I pull up the full Intel HD Graphics Control Panel app and go into the hotkeys section.  For a goof, I tried re-defining the rotate-y hotkey combinations from ctrl-alt-::something:: to shift-alt-::something::.

I enabled hot-keys & tried sublime again.  That worked.  Then I disabled hot-keys and it still worked.  So I'm calling it a fix.  This seems like a bug in Intel's utility though--it seems to be eating keystrokes that it's been told to ignore.

Tuesday, November 10, 2015

How we do Quality Assurance for the VDW

Last weekend I attended the most excellent PCORI Data Quality Code-a-thon, hosted by Michael Kahn and his colleagues over at University of Colorado, at which I met some really interesting and smart people doing really interesting work.  A couple of them evinced an interest in VDW QA work and I said I'd share the substantive checks that we are doing.

Some Context

This is volunteer work

Like most everything VDW, QA work is largely unfunded and distributed across implementing sites.  Volunteers from the data area workgroups (e.g., Utilization, Pharmacy, Enrollment) put together lists of checks pertaining mostly to their data areas, write VDW programs that implement the checks & periodically (generally annually, but sometimes more frequently) make a formal request that implementing sites run the code & submit their results to the program author(s) for collation & reporting out to the VDW Implementation Group.

One big implication of this is that our approach is not nearly as coordinated as an outside user might expect.  I'd like to say that we are evolving toward a common approach, and we do have a new(ish) "Cross-File QA" group that's taking on meta-standards for QA work, but there is definitely a long way to go before this is uniform enough to be coherent to anyone not familiar with the history.

QA Is Multi-purpose

We generally try to kill 2 birds with our QA stones.  Primarily we want to characterize the quality of our implementations for ourselves, each other, and our user community.  But we also love it when our reports are useful to Investigators writing grant applications, who sometimes need to brag about, e.g., how many person-years' worth of data we have across the network for people with a drug benefit.

This can be a slippery slope, on occasion leading individual sites to declare that a given measure has strayed from QA (which is generally exempt from IRB approval) into substantive research territory, or that it exposes what should be proprietary information.  One example that comes to mind on Enrollment was a measure of churn--e.g., in a typical month, how many enrollees does a site tend to lose to disenrollment, and how many do they pick up?  It's a constant dance/negotiation.

Roy's QA Prejudices

To my way of thinking the best QA:
  1. Enables implementers to find (and fix) their own errors first, before exposing them to any larger audience.  This is a matter of professional courtesy.
  2. Includes as many objective checks as are practical to implement in code, and presents the running user with:
    1. A clear list of what the checks are
    2. What the tolerance is for each check (e.g., 3% of your patient language records can have nonstandard values, but any more than 5% and we're going to say you failed the check).
    3. Whether the file passed or failed each check.
  3. Includes more general descriptives characterizing the amount and predominant values in the data.  These are often most useful when viewed as part of collated output so you can compare sites.
  4. Produces collated quality/descriptive reports that
    1. are readily available to the user community (we have them up on the HCSRN web portal, behind the password-protected area).
    2. are easily updated (completely automatically if possible) so that implementers are incentivized to fix whatever issues they can as soon as possible (and get credit for doing so).

Following the lead of the Utilization (encounters) workgroup, we generally refer to the objective checks as "Tier 1" checks and the descriptives as "Tier 2".  Like most things, the checks are a matter of negotiation within the workgroup.  I've come to think of them as crucial adjuncts to the specs themselves because they sometimes reveal reasonable disagreements on how to interpret the specs.
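
The warn/fail tolerance idea can be sketched as a tiny classifier. The 2%/5% thresholds below are the ones used as examples in this post; a real workgroup would negotiate them per check.

```python
def grade_check(bad_fraction, warn_at=0.02, fail_at=0.05):
    """Classify an objective check by the fraction of off-spec records."""
    if bad_fraction >= fail_at:
        return "fail"
    if bad_fraction > warn_at:
        return "warn"
    return "pass"

print(grade_check(0.01), grade_check(0.03), grade_check(0.07))
```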

The Checks


Demographics

Tier 1

  1. All variables listed in the spec exist and are the proper type (character or numeric).  I don't personally like to check length and so don't, though there is diversity of opinion on that & so some QA programs do.  There is no tolerance on these checks--any missing variable is a fail.
  2. For those variables that have an enumerated domain of permissible values (pretty much everything but MRN and birth_date) that those are the only values found.  If > 2% of the values found are off-spec we issue a warning.  At 5% or greater we fail the check.
  3. MRN is unique.  Zero tolerance here.
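
A minimal sketch of these three checks, assuming invented variable names and a toy value domain (the real VDW spec defines both):

```python
# Spec variables and their expected Python-side types (stand-ins).
SPEC_VARS = {"mrn": str, "birth_date": str, "gender": str}
GENDER_DOMAIN = {"F", "M", "U"}  # invented domain for the sketch

def tier1_demog(records):
    results = {}
    # Check 1: every spec variable exists with the right type (zero tolerance).
    results["vars_ok"] = all(
        isinstance(r.get(v), t) for r in records for v, t in SPEC_VARS.items()
    )
    # Check 2: enumerated values stay on-spec (> 2% warn, >= 5% fail).
    off = sum(r["gender"] not in GENDER_DOMAIN for r in records) / len(records)
    results["gender"] = "fail" if off >= 0.05 else ("warn" if off > 0.02 else "pass")
    # Check 3: MRN is unique (zero tolerance).
    mrns = [r["mrn"] for r in records]
    results["mrn_unique"] = len(mrns) == len(set(mrns))
    return results

demog = [
    {"mrn": "a1", "birth_date": "1970-01-01", "gender": "F"},
    {"mrn": "a2", "birth_date": "1982-06-15", "gender": "M"},
]
print(tier1_demog(demog))
```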

Tier 2

  1. Counts of records (not just current enrollees) by
    1. Gender
    2. Race
    3. Ethnicity
    4. Age group
    5. Need for an interpreter
  2. Counts of enrollees over time by those same variables.


Enrollment

Tier 1

  1. All variables listed in the spec exist and are the proper type (character or numeric). Here again zero tolerance.
  2. For those variables that have an enumerated domain of permissible values (all but MRN, the start/stop dates, PCP and PCC) that those are the only values found.  Same tolerance here as with demog-- > 2% is a warn, > 5% a fail.
  3. Enr_end must fall after enr_start.  Zero tolerance.
  4. Enr_end must not be in the future.  Warn at 1%, fail at 3%.
  5. At least one of the plan type flags (which say whether the person/period was enrolled in a PPO, HMO, etc.) must be set.  Warn at 2% and fail at 4%.
  6. Ditto for the insurance type flags (e.g., commercial, medicare/caid, etc.).
  7. If any of the Medicare part insurance flags are set, the general Medicare insurance flag must be set. Zero tolerance.
  8. No period prior to 2006 should have the Medicare Part D flag set. 1% warn, 2% fail.
  9. If the Part D flag is set, the drugcov flag must also be set.
  10. If the high-deductible health plan flag is set, either the commercial or the private-pay flag must also be set.
  11. If any of the incomplete_* variable values is 'X' (not implemented) then all values must be 'X'.
(That last check refers to six variables too new to be listed in the publicly available specs.  They let implementers surface known problems with data capture, if there are any--for example, at Group Health we only have tumor registry information on people who live in one of the seventeen WA State SEER counties.)
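
Three of the cross-field checks above (numbers 3, 7, and 9) can be sketched like so. The flag names are invented stand-ins for the actual VDW variables.

```python
from datetime import date

def enrollment_violations(period):
    """Return the names of the cross-field checks this period fails."""
    bad = []
    if period["enr_end"] <= period["enr_start"]:                  # check 3
        bad.append("end_not_after_start")
    if any(period["medicare_parts"].values()) and not period["medicare"]:
        bad.append("part_flag_without_medicare")                  # check 7
    if period["medicare_parts"].get("D") and not period["drugcov"]:
        bad.append("part_d_without_drugcov")                      # check 9
    return bad

period = {
    "enr_start": date(2007, 1, 1),
    "enr_end": date(2007, 12, 31),
    "medicare": False,
    "medicare_parts": {"A": True, "B": False, "D": False},
    "drugcov": True,
}
print(enrollment_violations(period))
```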

Tier 2

  1. Counts and percents of enrollees over time by all substantive enrollment variables--plan types, insurance types, drug coverage, etc.
  2. Counts & percents over time by several demographic variables (listed above under demographics).
  3. Counts & percents of enrollees over time by whether the values in primary care clinic (PCC) and primary care physician (PCP) appear to be valid (that is, contain at least one non-space and non-zero character).

More Later

As I'm the primary author of the E/D checks, these were the easiest to hand.  I'll be back with at least some of the other checks the other groups have implemented.

Wednesday, May 21, 2014

Axis Problem

How can I get the 0 value on this graph's x-axis to stay left-justified, without using a VALUES = statement on the XAXIS?

The code for this is:

data gnu ;
  input
    @1   clinic        $char5.
    @7   report_date   date9.
    @19  readmit_rate  3.1
  ;
  format report_date monyy7. ;
datalines ;
north 31may2012   0.8
north 30jun2012   0.2
north 31jul2012   0.3
west  31may2012   0.0
west  30jun2012   0.0
west  31jul2012   0.0
run ;

options orientation = landscape ;

ods html path = "c:\temp" (URL=NONE)
         body   = "deleteme.html"
         (title = "Axis Problems") ;

    proc sgplot data = gnu ;
      hbar report_date / response = readmit_rate ;
      by clinic ;
      xaxis grid /* values = (0 to .1 by .1) */ ;
      yaxis grid ;
    run ;

ods _all_ close ;

Friday, January 17, 2014

Why aren't these colors the same?

When I run:
proc format ;
  value $sub
    "bibbity" = "5"
    "bobbity" = "30"
    "boo" = "80"
    "baz" = "120"
    "foo" = "150"
    "zoob" = "180"
run ;

data test ;
  do subject = 'bibbity', 'bobbity', 'boo', 'baz', 'foo' ;
    do obs_date = '01-jan-2010'd to '31-dec-2013'd by 30 ;
      num_widgets = input(put(subject, $sub.), best.) + floor(uniform(4) * 30) ;
      proportion_blue = uniform(4) ;
      output ;
    end ;
  end ;
  format obs_date mmddyy10. proportion_blue percent9.2 ;
run ;

%let out_folder = c:/temp/ ;

ods graphics / height = 6in width = 10in ;
ods html path = "&out_folder" (URL=NONE)
         body   = "deleteme.html"
         (title = "Why are bubble color and line color not coordinated?") ;

  proc sgplot data = test ;
    loess  x = obs_date y = num_widgets / group = subject ;
    bubble x = obs_date y = num_widgets size = proportion_blue / group = subject transparency=0.5 ;
    xaxis grid ;
    yaxis grid ;
  run ;

ods _all_ close ;

I get:

I think it's confusing that the bubble colors and the loess line colors don't match.

If I change from loess to a series plot, the colors match.

Anybody know how I can get the colors to match?


Sunday, November 10, 2013

Loving R's ggplot2!

So I'm messing around with R for a Coursera course I'm taking, and totally loving the ggplot2 library.  Check out this lovely plot of some loan data we got from the course instructor:

That's a scatterplot with a loess smoother + confidence intervals, all done with this simple call:

qplot(x     = FICO
    , y     = ir
    , data  = loansData
    , color = Loan.Length
    , geom  = c('point', 'smooth')
    , ylab  = "Interest Rate"
    , xlab  = "FICO Score"
)

How awesome is that?