Inertial Sensor-Based Gait Analysis
The subjects under consideration are amputees wearing leg prostheses as well as healthy subjects. From the measured inertial data, quantities are calculated that characterize the subject's gait; gait phases and joint angles are of particular interest. However, obtaining this information from the raw data requires advanced estimation schemes and algorithms that account for measurement biases.
Unlike previous results, our approach exploits the geometrical constraints induced by the joint mechanics instead of requiring complex calibration movements or exact sensor mounting. This new technique provides highly precise joint angle estimates. Furthermore, discrete events like heel strike and toe-off are detected to determine the current gait phase, and a corresponding automaton model is generated and updated online.
The outlined gait analysis system is designed to work both offline and online, on both prostheses and healthy legs. Its accuracy and reliability are compared to those of an optical 3D measurement system at the end of this page.
In the following, a brief description of the methods and results of both the gait phase detection and the joint angle estimation is provided.
Gait Phase Detection
With only a single inertial sensor attached to the foot, a detailed gait phase detection is realized that separates the gait cycle into four phases: loading response, foot flat, pre-swing, and swing phase. Transitions from one gait phase to the next are detected based on elaborate criteria on the measured accelerations and angular rates as well as their time derivatives and integrals. These transitions drive a state automaton that outputs the current gait phase at any point in time. The resulting gait phase detection copes with various walking speeds, arbitrary walking directions, abrupt direction changes, stair climbing, and unforeseen short-time ground contact in mid-swing. Besides the current gait phase, the algorithm provides accurate estimates of the foot-ground angle trajectory for each step. It will be further extended towards high-precision step length estimation.
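The automaton structure described above can be sketched as follows. Note that this is only an illustration of the four-phase state machine: the threshold constants (`GYRO_REST`, `ACC_IMPACT`) and the simple norm-based criteria are invented placeholders, not the elaborate criteria of the actual method.

```python
from enum import Enum

class GaitPhase(Enum):
    LOADING_RESPONSE = "loading response"
    FOOT_FLAT = "foot flat"
    PRE_SWING = "pre-swing"
    SWING = "swing"

class GaitPhaseAutomaton:
    """Cyclic four-phase automaton driven by a foot-mounted IMU.

    The transition criteria below are simplified placeholder checks on
    the angular rate and acceleration magnitudes; the actual method uses
    more elaborate criteria including derivatives and integrals."""

    GYRO_REST = 0.5    # rad/s: below this the foot is roughly at rest (placeholder)
    ACC_IMPACT = 15.0  # m/s^2: above this indicates an impact (placeholder)

    def __init__(self):
        self.phase = GaitPhase.SWING

    def update(self, gyro_norm, acc_norm):
        p = self.phase
        if p is GaitPhase.SWING and acc_norm > self.ACC_IMPACT:
            self.phase = GaitPhase.LOADING_RESPONSE   # heel strike detected
        elif p is GaitPhase.LOADING_RESPONSE and gyro_norm < self.GYRO_REST:
            self.phase = GaitPhase.FOOT_FLAT          # foot settles on ground
        elif p is GaitPhase.FOOT_FLAT and gyro_norm > self.GYRO_REST:
            self.phase = GaitPhase.PRE_SWING          # heel rises
        elif (p is GaitPhase.PRE_SWING and acc_norm < self.ACC_IMPACT
              and gyro_norm > 2 * self.GYRO_REST):
            self.phase = GaitPhase.SWING              # toe off
        return self.phase
```

Because the phases form a cycle, each state has exactly one successor, which keeps the automaton robust: a missed criterion delays a transition but cannot send the automaton into an inconsistent state.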

Joint Axis and Position Estimation
As explained below, the accuracy of IMU-based joint angle estimation depends strongly on two fundamental pieces of information. The first is the (constant) orientation of the sensors' coordinate systems with respect to the joint axis or the segments they are mounted on; this information is captured by the coordinates of the joint axis' direction vector in the local sensor frames. The second is the (constant) joint position vector from the origin of each sensor frame to the joint center, which is required in many cases to apply drift-reducing techniques.
Accurate estimation of the joint axis and joint position in the local sensor coordinates is therefore the key to precise IMU-based angle measurement. In previous approaches, the joint axis is usually determined by restricting the mounting of the sensor to certain orientations or by performing complicated calibration movements, in which case the precision of the axis estimate depends on how precisely the subject performs the calibration movement. The joint position has likewise been determined by restricting the mounting of the sensor to certain locations or, alternatively, by manual measurement after mounting. These methods might be suitable where there are even surfaces and right angles or a tight mechanical setup, as in most robotic systems, but that is generally not the case on the human body. In practice, it is desirable to obtain good results even when sensors are attached to the human body in arbitrary orientation. Furthermore, manually measuring the 3D vector from the sensor to the joint center is rather complicated: it is cumbersome but feasible on the ankle or shoulder, yet almost impossible on the hip joint without the use of X-ray imaging. In conclusion, none of the existing solutions is suitable in the practical context of gait analysis.
It is therefore important to develop new methods for joint axis and position estimation. The key step towards improved methods is to note that the desired information is hidden in the inertial measurement data of almost any motion the subject might perform. Specifically, on both hinge joints and spheroidal joints, the angular rates and accelerations must fulfill kinematic constraints that incorporate the joint axis' direction vector (in the case of hinge joints) and the joint position vector (in the case of spheroidal joints). From a rather small amount of measurement data, the axis and position can thus be estimated using nonlinear least squares techniques. The success of this method is demonstrated in simulations (below) and proven by experimental comparison to existing methods (also below).
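For the hinge-joint case, the kinematic constraint states that the angular rates of both segments differ only by a rotation about the joint axis, so the components perpendicular to the axis have equal norms at every time instant. The following sketch fits this constraint by nonlinear least squares; the spherical-angle parameterization, the solver choice, and the multi-start loop are illustrative assumptions, not necessarily the authors' implementation, and the (analogous) position estimation from accelerations is omitted.

```python
import numpy as np
from scipy.optimize import least_squares

def axis_from_angles(phi, theta):
    """Unit vector parameterized by two spherical angles."""
    return np.array([np.cos(phi) * np.cos(theta),
                     np.cos(phi) * np.sin(theta),
                     np.sin(phi)])

def estimate_hinge_axis(gyr1, gyr2):
    """Estimate the hinge joint axis in both local sensor frames.

    gyr1, gyr2: (N, 3) angular rates measured by the IMUs on the two
    segments. For an ideal hinge the constraint
        ||gyr1 x j1|| = ||gyr2 x j2||
    holds at every sample, where j1, j2 are the coordinates of the
    joint axis in the two sensor frames. Each axis is described by two
    spherical angles, giving a four-parameter nonlinear least squares
    problem; several initial guesses guard against local minima."""
    def residuals(p):
        j1 = axis_from_angles(p[0], p[1])
        j2 = axis_from_angles(p[2], p[3])
        return (np.linalg.norm(np.cross(gyr1, j1), axis=1)
                - np.linalg.norm(np.cross(gyr2, j2), axis=1))

    best = None
    for x0 in ([0.0, 0.0, 0.0, 0.0], [0.5, 1.0, -0.5, 2.0],
               [-0.7, 2.5, 0.9, -1.5]):
        sol = least_squares(residuals, x0=np.array(x0))
        if best is None or sol.cost < best.cost:
            best = sol
    return axis_from_angles(*best.x[:2]), axis_from_angles(*best.x[2:])
```

Since the constraint only involves norms, the axis directions are recovered up to sign, which is sufficient for consistent angle measurement once a sign convention is fixed.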
Convergence is global, and there are no assumptions regarding the sensor mounting. Precision does not depend on the accuracy of the motion; instead, a few seconds of arbitrary motion are sufficient to generate suitable data, as long as all of the joint's degrees of freedom are excited. Finally, the variance in the results of multiple estimation runs is up to ten times smaller than in the results of previous methods such as manual measurement and calibration movements. The new method therefore yields a significant improvement over the state of the art and represents the best available approach to joint axis and position estimation. Its benefits in joint angle estimation from inertial data are demonstrated below.
Detailed explanations and results can be found in [1].
Joint Angle Estimation
First, we consider hinge (or pin) joints. There are several ways to estimate the joint angle of a hinge joint from the measured accelerations and angular velocities. Many of them use strap-down integration and some coordinate transformation; other approaches reduce the problem to rotations around the joint axis. In both cases, however, it is crucial to know the joint axis' direction vector in the local sensor coordinates. Moreover, the joint position is required in many cases to apply drift-reducing techniques. Proper joint angle estimation therefore depends on knowledge of the joint axis and the joint position, both in the coordinates of the local sensor frames. The section above explains how this crucial information can be determined from gyroscope and accelerometer measurements by exploiting kinematic constraints.
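Once the axis coordinates in both sensor frames are known, the simplest hinge angle estimate integrates the difference of the two angular rates projected onto the axis. The following is a minimal sketch of this step (variable names are illustrative); it also makes the drift problem explicit:

```python
import numpy as np

def gyro_joint_angle(gyr1, gyr2, j1, j2, dt):
    """Hinge joint angle by integrating the angle rate about the axis.

    gyr1, gyr2: (N, 3) angular rates of the two segment IMUs.
    j1, j2: joint axis direction in each local sensor frame.
    The joint angle rate is the difference of both segments' angular
    rates projected onto the joint axis. Pure integration drifts due
    to gyroscope bias, which is why accelerometer-based corrections
    are applied in practice."""
    rate = gyr1 @ j1 - gyr2 @ j2   # scalar joint angle rate per sample
    return np.cumsum(rate) * dt    # angle relative to the initial pose
```

Any constant gyroscope bias b enters the result as a ramp b*t, which motivates the drift-reducing fusion with accelerometer information discussed next.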
The upper and lower leg are equipped with one IMU each. Based on the outlined joint axis and position estimation, the knee joint angle is calculated from accelerations and angular rates using Kalman filter techniques. The obtained angle is compared to the results of a 3D optical gait analysis. Since the experiment is performed by an amputee wearing a leg prosthesis, the method's accuracy can be evaluated both on the rigid mechanical setup of the prosthesis and on the rather flexible human leg.
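The Kalman filter used here is detailed in [2]. As an illustration of the underlying drift-reduction principle only (not the authors' filter), a first-order complementary filter that blends a gyroscope-based angle rate with a noisy but drift-free accelerometer-based angle estimate could look like this; the gain `k` is an illustrative tuning parameter:

```python
import numpy as np

def complementary_joint_angle(rate, angle_acc, dt, k=0.01):
    """Fuse gyroscope and accelerometer information for a hinge angle.

    rate: gyroscope-based joint angle rate per sample (rad/s).
    angle_acc: accelerometer-based angle estimate per sample (rad),
               noisy but free of long-term drift.
    Each step integrates the angle rate and pulls the result slightly
    (gain k) toward the accelerometer-based angle, so gyroscope bias
    cannot accumulate into unbounded drift."""
    angle = angle_acc[0]
    out = np.empty(len(rate))
    for i, (r, a) in enumerate(zip(rate, angle_acc)):
        angle = (1.0 - k) * (angle + r * dt) + k * a
        out[i] = angle
    return out
```

A Kalman filter generalizes this idea by choosing the blending gain from the noise statistics of both sources instead of a fixed constant.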
In further analyses, the novel methods are used to estimate the ankle joint angle (dorsiflexion) on both the artificial and the human leg. Again, the result is compared to the angles obtained from a 3D optical gait analysis.
The comparison demonstrates that precise angle measurement is feasible with inertial sensors using the novel methods. For the knee joint angle, higher accuracy is achieved on a leg prosthesis than on the contralateral leg, where skin and muscle motions slightly affect both the optical and the inertial measurements in moments of heel strike and toe-off.
Detailed explanations and results can be found in [2]. All of the presented methods are suitable for online use. For a typical online application in the context of closed-loop control see ILC of FES-Assisted Gait.
Publications
[1] T. Seel, T. Schauer, J. Raisch. Joint Axis and Position Estimation from Inertial Measurement Data by Exploiting Kinematic Constraints. In IEEE Multi-Conference on Systems and Control, pages 45–49, Dubrovnik, Croatia, 2012.
[2] T. Seel, T. Schauer. IMU-based Joint Angle Measurement Made Practical. In Proc. of the 4th European Conference on Technically Assisted Rehabilitation - TAR 2013, Berlin, Germany, 2013.