9:25am - 11:40am
Robots on Haskell
Anthony Cowley (Invited Talk)
Type providers and error reflection in Idris
Idris is a pure, dependently typed programming language that aims to be usable for real work. However, there's more to a language than its type system! This talk will demonstrate the construction of an embedded database query language in Idris, relying heavily on type providers and error reflection.
Idris's type providers, inspired by the feature of the same name in F#, allow effectful computations to be run at compile time to generate terms that can be used to type check later parts of the program. Unlike in F# or Haskell, ordinary Idris functions can compute types, so an approach relying on compile-time code generation is not necessary. The embedded query language uses a type provider to statically check queries against the schema of a live database.
Error reflection is a feature that allows Idris libraries to define their own custom compiler errors. These are used to give useful feedback to users of the database library.
Although the talk will primarily consist of a live demonstration of writing Idris code, it will also touch on relevant aspects of the language implementation.
Modules and Abstraction
Mechanisms for abstraction have their roots in interpreting the existential quantifier from logic as a type. I'll describe Reynolds' idea of "representation independence" as a formal basis for specifying abstraction as a program property, and I will compare these seeds of theory to their modern implementations as module systems in Standard ML and OCaml.
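To give a flavor of representation independence (a minimal OCaml sketch, not code from the talk): two structures sealed by the same signature are interchangeable, because clients can only observe the abstract type through the operations the signature exports.

```ocaml
(* A signature with an abstract type: clients cannot see how t is built. *)
module type COUNTER = sig
  type t
  val zero : t
  val incr : t -> t
  val value : t -> int
end

(* Representation 1: a plain int. *)
module IntCounter : COUNTER = struct
  type t = int
  let zero = 0
  let incr n = n + 1
  let value n = n
end

(* Representation 2: a unit list whose length is the count. *)
module ListCounter : COUNTER = struct
  type t = unit list
  let zero = []
  let incr l = () :: l
  let value = List.length
end

(* A client written against COUNTER behaves identically with either module. *)
let twice (module C : COUNTER) = C.(value (incr (incr zero)))
```

The existential reading is that `COUNTER` asserts "there exists some type `t` with these operations," without saying which one, which is exactly the property the talk traces back to Reynolds.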
12:50pm - 3:05pm
The F# Approach to Relaxation
Don Syme (Invited Talk)
Managing Bioinformatics Pipelines Within OCaml
Sebastien Mondet and Jeff Hammerbacher
Ketrew means *Keep Track of Experimental Workflows*: it is a workflow engine based on an Embedded Domain Specific Language. The EDSL is a simple OCaml library providing combinators to define complex and convoluted workflows (interdependent steps/programs using a lot of data, with many parameter variations, running on different hosts with various schedulers) and submit them to the engine. The engine takes care of orchestrating the runs of those workflows and keeping track of everything that succeeds, fails, or gets lost. Ketrew can run as a standalone application or with a client-server architecture.
At Mount Sinai (the Hammer Lab), we develop and use Ketrew to run bioinformatics pipelines on very large amounts of data. An example of such a workflow is our cancer immunotherapy pipeline. For a given patient, DNA extracted from the tumor is compared with a healthy-tissue sample; the resulting *“variants”* are used to design treatments.
Biomedical computational pipelines involve various long-running computational steps; for each of them, we want to run many parameter variations. At the same time, bioinformatics tools are famous for being quite “bad” software; they tend to fail in mysterious ways on seemingly valid inputs. The computing infrastructure is also very diverse and gives little control to the end user. Ketrew is designed with many of those adverse conditions in mind, but it is also a generic tool not at all restricted to bioinformatics: it has been used for running backups, building websites, ensuring continuous integration, etc.
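As an illustration of the EDSL idea only (this is a hypothetical miniature, not Ketrew's actual API), a workflow language in OCaml can be as small as a tree of commands with dependencies, plus combinators to build it and an interpreter that orders the steps:

```ocaml
(* Hypothetical miniature workflow EDSL; Ketrew's real combinators differ. *)
type workflow =
  | Run of string                          (* a command to execute        *)
  | Depends_on of workflow * workflow list (* a step and its dependencies *)

let run cmd = Run cmd
let depends_on step deps = Depends_on (step, deps)

(* Linearize the workflow: dependencies come before the steps needing them. *)
let rec plan = function
  | Run cmd -> [ cmd ]
  | Depends_on (step, deps) -> List.concat_map plan deps @ plan step
```

An engine would then submit each planned step to a scheduler and record its outcome; here `plan` just returns an execution order, e.g. `plan (depends_on (run "call-variants") [ run "align-tumor"; run "align-normal" ])` runs the alignments before the variant calling.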
LVars: lattice-based data structures for deterministic parallel and distributed programming
Parallel programming is notoriously difficult. A fundamental reason for this difficulty is that programs can yield inconsistent answers, or even crash, due to unpredictable interactions between parallel tasks. But it doesn't have to be this way: guaranteed-deterministic parallel programming models promise freedom from subtle, hard-to-reproduce nondeterministic bugs. In a deterministic program, the same input always produces the same output; in a guaranteed-deterministic parallel programming model, then, that property is true of every program written using the model, even when those programs are run in parallel.
So, what are these parallel programming models that guarantee determinism, and how do they work? In this talk, we'll survey the landscape of guaranteed-deterministic approaches to parallel programming. Then, we'll dig into lattice-based data structures, or LVars, which are the basis for a new guaranteed-deterministic parallel programming model that my colleagues and I developed. Throughout the talk, we'll look at examples written using LVish, our Haskell library for programming with LVars. Finally, we'll consider the relationship between LVars and conflict-free replicated data types for eventually consistent distributed systems, like the system that stores the contents of your Amazon shopping cart.
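To give a flavor of the idea (an OCaml sketch for this program, not the LVish API), an LVar's state may only grow monotonically in a lattice, and every write is a least-upper-bound join, so the order of writes cannot be observed:

```ocaml
(* Illustrative LVar over the lattice of ints under max (not LVish). *)
type lvar = { mutable state : int }

let create () = { state = min_int }  (* start at the lattice's bottom *)

(* A write joins the new value into the state; joins commute, so
   concurrent writers cannot make the final state order-dependent. *)
let put lv v = lv.state <- max lv.state v

(* A threshold read reveals only that the state has passed [t], never
   the exact intermediate states (checked here, blocking in a real LVar). *)
let get_threshold lv t = if lv.state >= t then Some t else None
```

Because `put` is a commutative, idempotent join and reads are threshold reads, any interleaving of the same writes yields the same observations; the same join-based reasoning underlies the conflict-free replicated data types mentioned at the end of the abstract.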
3:20pm - 5:35pm
OCaml at Bloomberg
Maxime Ransan (Invited Talk)
Proving things about Biology
BMA is a tool for building and analyzing Systems Biology models, such as cellular-level descriptions of skin and blood. In this talk, I will describe the F# implementation of several static-analysis-based provers that do the heavy lifting in BMA. These include a temporal property checker and a fast, purpose-built stability checker that scales by composing many small, local proofs.
Though this domain (automatically proving properties of systems) is known to be amenable to functional programming techniques, the BMA implementation exploits several F# features in particular. Almost all the data structures are immutable collections such as Set and Map. The main fixed-point loop is implemented using the Seq lazy-sequence library: transforming the loop from a recursive function to a Seq.fold was quick, and it yields code that allows the TypeScript-based UI to pause, step, and fast-forward the proof search.
The analyzer uses FParsec (a parser combinator library) to parse the definition of the model's update functions, and Z3 (an SMT solver) as the underlying basis of the provers. The whole application runs on Azure.
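The fixed-point-as-fold idea can be sketched (in OCaml rather than the talk's F#, and independent of BMA's code) by unfolding the iteration into a lazy sequence of states; a driver can then fold the whole sequence to the answer, or pull states one at a time to pause and step:

```ocaml
(* Lazily unfold the states of an iteration until [step] stops changing them. *)
let states step init =
  Seq.unfold
    (fun s -> let s' = step s in if s' = s then None else Some (s', s'))
    init

(* Folding the sequence yields the fixed point; a UI could instead consume
   the states one by one to pause, step, or fast-forward the search. *)
let fixpoint step init = Seq.fold_left (fun _ s -> s) init (states step init)
```

The design point is that the recursion is reified as data: the same `states` sequence serves both a batch prover (via the fold) and an interactive one (via incremental consumption), which is what makes the pause/step/fast-forward behavior cheap to add.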
elm-d3: Front-end Development without Frameworks