Inertial Sensor-Based Gait Analysis

This project is concerned with the development and evaluation of algorithms that allow an extensive analysis of the human gait based on inertial measurement data. Inertial measurement units (IMUs) are attached to the lower limb segments and provide accelerometer and gyroscope signals, each in three dimensions. The use of magnetometers is avoided to enable indoor use and ensure robustness in the presence of magnetic disturbances.
The subjects under consideration are amputees wearing leg prostheses as well as healthy subjects. From the measured inertial data, quantities are calculated that characterize the gait of the subject. Gait phases and joint angles are of particular interest. However, in order to gain this information from the raw data, one needs to develop advanced estimation schemes and algorithms that account for measurement biases.
Unlike previous results, our approach exploits the geometrical constraints induced by the joint mechanics instead of requiring complex calibration movements or exact sensor mounting. This new technique provides highly precise joint angle estimates. Furthermore, discrete events like heel strike and toe-off are detected to determine the current gait phase, and a corresponding automaton model is generated and updated online.
The outlined gait analysis system is designed to work both offline and online, on both prostheses and healthy legs. Its accuracy and reliability are compared to those of an optical 3D measurement system at the end of this page.

In the following, a brief description of the methods and results of both the gait phase detection and the joint angle estimation is provided.

Gait Phase Detection

Fig. 2: State automaton modeling the four phases of human gait and the transitions between these phases.

With only a single inertial sensor attached to the foot, a detailed gait phase detection is realized that separates the gait into four phases: loading response, foot flat, pre-swing, and swing phase. Transitions from one gait phase to the next are detected based on elaborate criteria on the measured accelerations and angular rates as well as their time derivatives and integrals. These transitions drive a state automaton that outputs the current gait phase at any point in time. The resulting gait phase detection copes with various walking speeds, arbitrary walking directions, abrupt direction changes, stair climbing, and unforeseen short-time ground contact in mid-swing. Besides the current gait phase, the algorithm provides accurate estimates of the foot-to-ground angle trajectory for each step. It will be further extended towards high-precision step length estimation.
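The automaton logic can be sketched as follows. The state names follow Fig. 2, but the threshold values and the simple norm-based transition conditions are purely illustrative assumptions; the actual criteria also involve time derivatives and integrals of the measured signals.

```python
import numpy as np

# Hypothetical thresholds (rad/s, m/s^2); the project's real criteria
# are more elaborate and use derivatives and integrals as well.
GYRO_REST, GYRO_RISE, GYRO_SWING, ACC_IMPACT = 0.3, 1.0, 3.0, 15.0

class GaitPhaseAutomaton:
    """Four-state automaton driven by foot-mounted IMU signal norms."""

    def __init__(self):
        self.phase = "loading_response"

    def update(self, gyro, acc):
        g, a = np.linalg.norm(gyro), np.linalg.norm(acc)
        if self.phase == "loading_response" and g < GYRO_REST and a < ACC_IMPACT:
            self.phase = "foot_flat"          # foot comes to rest
        elif self.phase == "foot_flat" and g > GYRO_RISE:
            self.phase = "pre_swing"          # heel rises, foot starts rotating
        elif self.phase == "pre_swing" and g > GYRO_SWING:
            self.phase = "swing"              # toe-off, fast rotation
        elif self.phase == "swing" and a > ACC_IMPACT:
            self.phase = "loading_response"   # heel-strike acceleration peak
        return self.phase
```

Feeding one sample per control cycle keeps the detector usable online, since each update only inspects the current signal norms and the stored state.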

Fig. 3: Gait phase detection for a single step based on measured accelerations and angular velocities. The depicted measurement signals have been transformed into a global coordinate system in which the z-axis is vertical. Subsequently, accelerations due to gravity have been removed. In the resulting signals, one can clearly see that both angular rates and accelerations almost vanish in the foot flat phase and that the accelerations peak at the moment of heel strike. Based on these and a number of more elaborate criteria, the step is successfully divided into four gait phases.
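The preprocessing step described in the caption can be sketched in a few lines, assuming the sensor orientation has already been estimated (e.g., by strapdown integration of the gyroscope signals); the function name and quaternion convention below are illustrative, not part of the original implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def free_acceleration(acc_sensor, quat_sensor_to_global, g=9.81):
    """Rotate accelerometer samples into a global frame whose z-axis is
    vertical, then remove the gravity component.

    acc_sensor:            (N, 3) accelerometer readings in the sensor frame
    quat_sensor_to_global: (N, 4) orientation quaternions in [x, y, z, w] order
    """
    acc_global = Rotation.from_quat(quat_sensor_to_global).apply(acc_sensor)
    acc_global[:, 2] -= g   # a stationary, level sensor reads +g on global z
    return acc_global
```

After this step, a resting foot yields near-zero signals in all axes, which is exactly the property the foot flat detection exploits.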

Joint Axis and Position Estimation

As explained below, the accuracy of IMU-based joint angle estimation depends critically on two fundamental pieces of information. The first is the (constant) orientation of the sensors' coordinate systems with respect to the joint axis or the segments they are mounted on; this information is captured by the coordinates of the joint axis' directional vector in the local sensor frames. The second is the (constant) joint position vectors from the origins of the sensor frames to the joint center, which are required in many cases to apply drift-reducing techniques.

Fig. 4: Joint axis and joint position in local sensor coordinate systems.

Therefore, accurate estimation of the joint axis and joint position in the local sensor coordinates is the key to precise IMU-based angle measurement. In previous approaches, the joint axis is usually determined by restricting the mounting of the sensor to certain orientations or by performing complicated calibration movements, where the precision of the axis estimate depends on how precisely the subject performs the calibration movement. The joint position has likewise been determined by restricting the mounting of the sensor to certain locations or, alternatively, by manual measurement after mounting. These methods may be suitable where there are even surfaces, right angles, or a tight mechanical setup, as in most robotic systems, but that is generally not the case on the human body. In practice, it is desirable to obtain good results even when sensors are attached to the body in arbitrary orientations. Furthermore, manually measuring the 3D vector from the sensor to the joint center is rather complicated: it may be cumbersome but feasible on the ankle or shoulder, yet it is almost impossible on the hip joint without the use of X-ray imaging. In conclusion, none of the existing solutions are suitable in the practical context of gait analysis.

Fig. 5: Joint axis and position estimation. Accelerometer and gyroscope data from an arbitrary motion that excites all degrees of freedom of the involved joints are analyzed. By exploiting kinematic constraints, the algorithm determines the coordinates of both the joint axis and the joint positions within a few seconds.

Therefore, it is important to develop new methods for joint axis and position estimation. The key step towards improved methods is to note that the desired information is hidden in the inertial measurement data of almost any motion the subject might perform. More precisely, on both hinge joints and spheroidal joints, the angular rates and accelerations must fulfill kinematic constraints that incorporate the joint axis' directional vector (in the case of hinge joints) and the joint position vectors (in the case of spheroidal joints). From a relatively small amount of measurement data, the axis and position can thus be estimated using nonlinear least squares techniques. The success of this method is demonstrated in simulations (below) and proven by experimental comparison to existing methods (also below).
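For a hinge joint, the kinematic constraint used in [1] states that the angular-rate components perpendicular to the joint axis must have equal norm in both sensor frames. The following sketch illustrates the resulting nonlinear least squares problem on synthetic, noise-free data; the two-angle spherical parametrization of the axis follows the paper, while all signal-generation details are simulation assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def axis(phi, theta):
    """Unit vector parametrized by two spherical angles."""
    return np.array([np.cos(phi) * np.cos(theta),
                     np.cos(phi) * np.sin(theta),
                     np.sin(phi)])

def residuals(x, gyr1, gyr2):
    # Hinge constraint: the angular-rate components perpendicular to the
    # joint axis must have equal norm in both sensor frames.
    j1, j2 = axis(x[0], x[1]), axis(x[2], x[3])
    return (np.linalg.norm(np.cross(gyr1, j1), axis=1)
            - np.linalg.norm(np.cross(gyr2, j2), axis=1))

# --- synthetic data with a known axis (simulation, not real IMU data) ---
rng = np.random.default_rng(0)
j1_true = np.array([1.0, 2.0, 2.0]) / 3.0       # unit axis in sensor-1 frame
R = Rotation.from_rotvec([0.4, -0.2, 0.7])      # fixed sensor-1 -> sensor-2 rotation
j2_true = R.apply(j1_true)                      # same axis in sensor-2 frame
gyr1 = rng.normal(size=(200, 3))                # arbitrary segment-1 rates
hinge_rate = rng.normal(size=(200, 1))          # arbitrary joint-angle rates
gyr2 = R.apply(gyr1) + hinge_rate * j2_true     # kinematically consistent rates

sol = least_squares(residuals, x0=[0.3, 0.3, 0.3, 0.3], args=(gyr1, gyr2))
j1_est, j2_est = axis(sol.x[0], sol.x[1]), axis(sol.x[2], sol.x[3])
```

Note that the constraint determines each axis only up to sign, so a recovered axis should be compared to the true one via the absolute value of their dot product.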

Fig. 6: Convergence of joint axis and joint position estimation. One hundred runs with randomized initial conditions were performed, all of which converged to the global optimum, i.e. the true axis and position coordinates, within less than twenty update steps. The half circles mark the estimates (and variance) obtained using more complicated alternative procedures.

Convergence is global, and no assumptions regarding the sensor mounting are required. Precision does not depend on the accuracy of the motion; instead, a few seconds of an arbitrary motion are sufficient to generate suitable data, as long as all of the joint's degrees of freedom are excited. Finally, the variance across multiple estimation runs is up to ten times smaller than that of previous methods such as manual measurement and calibration movements. Therefore, the new method yields a significant improvement over the state of the art and represents the best available approach to joint axis and position estimation. Its benefits in joint angle estimation from inertial data are demonstrated below.

Detailed explanations and results can be found in [1].

Joint Angle Estimation

First, we consider hinge (or pin) joints. There are several ways to estimate the joint angle of a hinge joint from the measured accelerations and angular velocities. Many of them use strap-down integration and some coordinate transformation; other approaches reduce the problem to rotations around the joint axis. In both cases, however, it is crucial to know the joint axis' directional vector in the local sensor coordinates. In addition, the joint position is required in many cases to apply drift-reducing techniques. Therefore, proper joint angle estimation depends on knowledge of the joint axis and the joint position, both in the coordinates of the local sensor frames. The section above explains how this crucial information can be determined from gyroscope and accelerometer measurements by exploiting kinematic constraints.
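As a minimal illustration of the axis-based approach, the hinge joint angle can be obtained (up to drift) by integrating the difference of the two angular rates projected onto the joint axis. This sketch assumes the axis coordinates j1 and j2 are already known, e.g. from the estimation scheme above.

```python
import numpy as np

def hinge_angle_gyro(gyr1, gyr2, j1, j2, dt):
    """Hinge joint angle from gyroscopes only (drift-prone sketch).

    gyr1, gyr2: (N, 3) angular rates of the two segments, each in its own
    sensor frame; j1, j2: unit joint-axis vectors in those frames; dt: the
    sampling interval.  The relative rate about the axis is integrated with
    the trapezoidal rule, so the result drifts slowly with gyroscope bias.
    """
    rel_rate = gyr2 @ j2 - gyr1 @ j1                  # joint angle rate (rad/s)
    increments = 0.5 * (rel_rate[1:] + rel_rate[:-1]) * dt
    return np.concatenate(([0.0], np.cumsum(increments)))
```

This drift is precisely why accelerometer information must be fused in for long-term use.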

The upper and lower leg are each equipped with one IMU. Based on the outlined joint axis and position estimation, the knee joint angle is calculated from accelerations and angular rates using Kalman filter techniques. The obtained angle is compared to the results of a 3D optical gait analysis. Since the experiment is performed by an amputee wearing a leg prosthesis, the method's accuracy can be evaluated both on the rigid mechanical setup of the prosthesis and on the rather flexible human leg.
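A minimal stand-in for the Kalman filter techniques mentioned above is a one-state filter that predicts the angle by gyroscope integration and corrects it with a noisy but drift-free accelerometer-based angle. The noise variances q and r below are hypothetical tuning values, not those used in the project.

```python
import numpy as np

def fuse_angle(gyro_rate, acc_angle, dt, q=1e-3, r=1e-1):
    """One-state Kalman filter for a hinge joint angle.

    gyro_rate: per-sample joint angle rate from the gyroscopes (rad/s)
    acc_angle: per-sample angle derived from the accelerometers (rad)
    q, r:      hypothetical process and measurement noise variances
    """
    theta, p = acc_angle[0], 1.0
    out = np.empty(len(gyro_rate))
    for k in range(len(gyro_rate)):
        theta += gyro_rate[k] * dt          # predict by gyro integration
        p += q
        kgain = p / (p + r)                 # correct with accelerometer angle
        theta += kgain * (acc_angle[k] - theta)
        p *= 1.0 - kgain
        out[k] = theta
    return out
```

The correction step bounds the drift that pure gyroscope integration would accumulate, at the cost of admitting a little accelerometer noise.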

Fig. 7: Comparison of IMU-based knee joint angle measurements and the results of an optical gait analysis for the gait of an amputee wearing a leg prosthesis. On both legs, the new IMU-based method proves to be highly accurate. On the prosthesis, however, the precision is particularly high due to the more rigid mechanical setup.

In further analyses, the novel methods are used to estimate the ankle joint angle (dorsiflexion) on both the artificial and the human leg. Again, the results are compared to the angles obtained from a 3D optical gait analysis.

Fig. 8: Comparison of IMU-based ankle joint angle (dorsiflexion) measurements and the results of an optical gait analysis for the gait of an amputee wearing a leg prosthesis. The angle that was measured on the artificial ankle differs clearly from the physiological ankle joint angle. But both on the prosthesis and on the contralateral leg, the new IMU-based method proves to be highly accurate.

The comparison demonstrates that precise angle measurement is feasible with inertial sensors using the novel methods. For the knee joint angle, higher accuracy is achieved on a leg prosthesis than on the contralateral leg, where skin and muscle motions slightly affect both the optical and the inertial measurements in moments of heel strike and toe-off.

Detailed explanations and results can be found in [2]. All of the presented methods are suitable for online use. For a typical online application in the context of closed-loop control, see ILC of FES-Assisted Gait.

Publications

  1. T. Seel, T. Schauer, J. Raisch. Joint Axis and Position Estimation from Inertial Measurement Data by Exploiting Kinematic Constraints. In Proc. of the IEEE Multi-Conference on Systems and Control, pages 45–49, Dubrovnik, Croatia, 2012.
  2. T. Seel, T. Schauer. IMU-based Joint Angle Measurement Made Practical. In Proc. of the 4th European Conference on Technically Assisted Rehabilitation (TAR 2013), Berlin, Germany, 2013.