Monday, December 19, 2011

Symmetric Type Sharing in ML

In the Standard ML language, there are two primary mechanisms for refining a module signature after the fact. One is a symmetric mechanism called type sharing; the other, called where type, is also known as fibration. Both facilitate modular programming by enabling reuse of module signatures (i.e., the syntactic "type" of a module).

signature EX_SIG0 =
sig
  structure M :
    sig
      type t
      type u
    end
  datatype d = A of M.t
  sharing type M.u = d
end

signature EX_SIG1 =
sig
  structure M :
    sig
      type t
      datatype u = A
    end
  structure N :
    sig
      datatype t = B
      type u = M.u
    end
  sharing type N.t = M.t
end

In the first example, EX_SIG0, the datatype d must come after the structure spec for M because it depends on M.t. However, since M.u in turn depends on d, we have a circularity. The type sharing constraint ties M.u and d back together. The second example illustrates the same principle, but here the symmetry is between two structure specs. This particular example could be resolved with recursive modules, but since production implementations of Standard ML do not support recursion at the module level, symmetric type sharing gives us an equivalent effect. In both of these examples, in the absence of recursion, where type cannot replace symmetric type sharing. In most other cases, however, it can. This matters because the semantics of type sharing is considerably more complicated than that of where type: module type checking requires a separate step to resolve all the sharing constraints, whereas where type can be reduced easily by pushing the type definition in question into the appropriate signature spec.
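To make the non-circular case concrete, here is a sketch of the reduction (the signature names POINT, PAIR_SHARING, and PAIR_WHERE are invented for illustration). A symmetric constraint between two independent structure specs:

signature POINT = sig type t end

signature PAIR_SHARING =
sig
  structure M : POINT
  structure N : POINT
  sharing type M.t = N.t
end

can be expressed equivalently with where type by pushing a definition into one of the specs:

signature PAIR_WHERE =
sig
  structure M : POINT
  structure N : POINT where type t = M.t
end

Because there is no circularity here, N.t can simply be defined as M.t, and no separate sharing-resolution step is needed during type checking.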
