In psi.f, you'll find three lines of this form:

PXI=PXI+PSI3(I1,I2,I3,NIT(NB),NU,XI)*XE(NS) 

Instead you'll want a call like this:

PXI=PXI+PG_XI(NS,NU,NGPI,NB)*XE(NK) 

Where PG_XI is a new data structure that will need to be passed everywhere that psi.f is called. At some higher-level location (perhaps as high as Python), PG_XI will be calculated once for a single element at the various XI values, NGPI = 1, ..., NGPIT (grid points). This can be done during the Calculate mesh command. In other words, as long as XI1, XI2, and XI3 do not vary between elements, you only need to calculate this once.
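The precomputation can be sketched in Python (a minimal sketch: `psi3` here is a 1D stand-in for the Fortran PSI3, and all names and shapes are illustrative, not the real PG_XI signature):

```python
def psi3(ns, xi):
    """Stand-in for the Fortran PSI3: 1D linear basis functions on [0, 1]."""
    return 1.0 - xi if ns == 0 else xi

def precompute_pg_xi(xi_grid, nst=2):
    """Evaluate every basis function at every grid point once.

    pg_xi[ns][ngpi] plays the role of PG_XI(NS, ..., NGPI, ...)."""
    return [[psi3(ns, xi) for xi in xi_grid] for ns in range(nst)]

# Done once, e.g. during the Calculate mesh command, as long as the
# xi grid does not vary between elements.
xi_grid = [0.0, 0.25, 0.5, 0.75, 1.0]
pg_xi = precompute_pg_xi(xi_grid)

def interpolate(xe, ngpi, pg_xi):
    """Inner loop: PXI = PXI + PG_XI(NS, NGPI)*XE(NS) becomes a table
    lookup rather than a basis-function call."""
    return sum(pg_xi[ns][ngpi] * xe[ns] for ns in range(len(xe)))
```

At ngpi = 2 (xi = 0.5) this interpolates halfway between the two nodal values, matching what a direct PSI3 evaluation would give.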

The new PG

The only other complication is that the XI1, XI2, and XI3 values listed above will need to be integers rather than reals: you cannot index an array with anything other than an integer. So some mapping from real xi values to integer grid indices may be needed to make this work properly.
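One possible mapping, assuming a uniform grid of NDIVS points on [0, 1] (a sketch; the function name and convention are illustrative):

```python
def xi_to_index(xi, ndivs):
    """Map a real xi in [0, 1] to the integer index of the nearest point
    on a uniform grid of ndivs points (spacing 1/(ndivs - 1))."""
    return round(xi * (ndivs - 1))

# With five grid points [0.0, 0.25, 0.5, 0.75, 1.0] -> indices 0..4,
# xi = 0.75 lands on index 3.
```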

As a first step, PG_XI will simply be calculated using the old PSI3 function. Once this works, you can have the user dynamically author PSI3 (perhaps using sympy to generate some C).
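Generating C from sympy might look like this (a sketch, assuming sympy is available; the cubic Hermite basis functions shown are the standard textbook ones, not necessarily what PSI3 currently implements):

```python
import sympy as sp

xi = sp.Symbol('xi')

# Standard 1D cubic Hermite basis functions (value/derivative at each end)
psi = [2*xi**3 - 3*xi**2 + 1,   # value at xi = 0
       xi**3 - 2*xi**2 + xi,    # derivative at xi = 0
       -2*xi**3 + 3*xi**2,      # value at xi = 1
       xi**3 - xi**2]           # derivative at xi = 1

# Emit a C expression for each basis function
for n, p in enumerate(psi):
    print(f"psi3_{n} = {sp.ccode(p)};")
```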

ADM Notes 2011-Jan-30

  • IXI1=ROUND(XI1*(NDIVS-1))
    IXI2=ROUND(XI2*(NDIVS-1))
    IXI3=ROUND(XI3*(NDIVS-1))
    
  • Adarsh has shown that you can do good surface rendering using his Bezier approximations with five grid points per xi direction, i.e. XI1 = [0., 0.25, 0.5, 0.75, 1.0]
  • These pre-computed grids could replace hypercube grids in many listing commands, though we could also keep a list of fully arbitrary xi points
  • The Sympy Basis Function Editor should have one Basis Function per "Model"
  • Each Basis Function model would have the vector Xi[] as the only independent variable and the multidimensional matrix Phi as the dependent variable. Possible structures for Phi would be

    Phi[NN, NK, NU]
    

    or perhaps we would have several dependent variables such as

    Phi[NN, NK]
    dPhidXi[NN, NK, NI]
    d2PhidXi2[NN, NK, NI1, NI2]
    d3PhidXi1dXi2dXi3[NN,NK]
    

    or alternatively

    Phi[NS]
    dPhidXi[NS, NI]
    d2PhidXi2[NS, NI1, NI2]
    d3PhidXi1dXi2dXi3[NS]
    
  • The latter two choices have the advantage that symbolic differentiation expressions could be done vector- or matrix-wise, whereas with the first choice each NU would have to be a separate expression. The third choice would be more generic with respect to basis function type, not assuming nodal values and derivatives, but the second choice would have the advantage of discriminating between nodes and nodal parameters and would make reading Hermite basis functions easier. However, I would guess that in either case each basis function would still be explicitly written out, so the separate indices would not be used. It would be worth seeing how textbooks or other codes present Hermite basis functions. My guess is they are just a single vector, which facilitates mathematically representing the interpolation itself as a simple inner product.
  • The introspection features of Sympy should be sufficient to determine many properties, though for example, if we used option 3, it would not be sufficient alone to determine NNT and NKT, only NST. This may be an argument for option 2.
  • Having Xi[] as an independent variable may also make it hard for Sympy to deduce the dimensionality of the element (i.e. NIT). One alternative would be to make Xi1, Xi2, Xi3 independent variables, and then the model equations would use these to initialize Xi[] accordingly. Thus for a 2D problem Xi[3] would not be initialized, and the dimension of Xi[] would be sufficient to determine NIT. The unconventional thing about this would be that Xi[] would be a required variable but not predefined. Technically this could be a dependent variable, with the unusual property that it would ordinarily have to be defined before the user equations. That would probably not be difficult to enforce, e.g.

    Xi=[Xi1,Xi2]
    
  • The template could even provide the default initialization:

    Xi=[Xi1,Xi2,Xi3]
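The dimensionality deduction described above can be sketched with sympy (illustrative names; a sketch, not the real model framework):

```python
import sympy as sp

Xi1, Xi2 = sp.symbols('Xi1 Xi2')

# A 2D model initializes Xi[] with only two components; the template
# default Xi = [Xi1, Xi2, Xi3] would simply be overridden.
Xi = [Xi1, Xi2]
NIT = len(Xi)   # element dimensionality deduced from the length of Xi[]

# Bilinear Phi[NS] written against Xi[]; introspecting free_symbols
# reaches the same answer.
Phi = [(1 - Xi[0])*(1 - Xi[1]),
       Xi[0]*(1 - Xi[1]),
       (1 - Xi[0])*Xi[1],
       Xi[0]*Xi[1]]
NST = len(Phi)
```

As the notes observe, introspection like this determines NST directly, but it would recover NNT and NKT only with a structure like option 2.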