Algorithms for Adaptive Control

Based on the previous analysis, the estimated source localization was dramatically affected by variation in the distance between the two microphones of a pair. As this distance varied, the number of angles available for source estimation changed as well. This variation increased the error in our system, and we concluded that the distance between the microphones within a pair should be held constant.

 

Continuing to optimize source estimation, we wanted to observe the behavior of the system as the distance between microphone pairs changed while holding the spacing within each pair constant. In other words, we wanted to investigate strategies to maximize the resolution of the estimated source position based on the change in distance between microphone pairs. Finally, we implemented algorithms for the design of a MATLAB controller to govern the position of each microphone pair. After testing and simulation, we arrived at four questions, or criteria, that must be met to optimize the source localization. The picture below shows a visual representation of the system being analyzed.

The biggest concern this system faces is its ability to produce an estimated source position at all. An estimated source position depends on the intersection of two estimated angles; if the system estimates the same angle for both microphone pairs, no source position can be produced. Therefore, our first strategy in controlling the microphone pairs was to position them so that two different angles were produced for source estimation. Our first algorithm asks:

 

Are the microphones in near field or far field?

 

This question accomplishes two things. The first is to determine whether the current robot positions allow the acoustic source to be estimated. As previously stated, this depends on whether the microphone pairs are producing two different estimation angles. If the robots are in far field, meaning both pairs report the same estimation angle, the controller moves the robots farther apart until their positions are in the near field.

Secondly, this algorithm establishes the acoustic limitation of the system. As the robots move closer to the source, there is a point at which further movement could put the microphone array back in the far field. The position at which the robots cross from near field to far field is known as the acoustic limitation. These positions are critical for source estimation because if the robots are tracking a source and then move into far field, the estimated source position is lost. So, this algorithm also identifies the acoustic limitation of our system.
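The intersection logic and the far-field test behind this first question can be sketched explicitly. This is illustrative Python (the actual controller was written in MATLAB); the function names, the angle-difference tolerance, and the step size are our assumptions, not values from the original system.

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect the two bearing rays p_i + t * (cos(theta_i), sin(theta_i)).

    p1, p2 are the (x, y) centers of the two microphone pairs; theta1,
    theta2 are the estimated angles in radians.  Returns None when the
    bearings are (nearly) parallel -- the far-field failure mode in which
    no source position can be produced.
    """
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * (-d2[1]) - d1[1] * (-d2[0])   # 2x2 determinant
    if abs(denom) < 1e-9:
        return None                                # same angle: no estimate
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - ry * (-d2[0])) / denom   # Cramer's rule
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

def widen_if_far_field(spacing, theta1, theta2,
                       min_angle_diff=0.05, step=0.1):
    """Question 1: if both pairs report (nearly) the same angle, the
    array is in far field, so move the pairs farther apart."""
    if abs(theta1 - theta2) < min_angle_diff:
        return spacing + step
    return spacing
```

With the pairs at (0, 0) and (2, 0) reporting 45° and 135° bearings, the rays intersect at (1, 1); with equal bearings, `triangulate` returns `None` and the spacing is widened until two distinct angles appear.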

Our second algorithm asks,

 

Is the microphone array centered?

 

This is an important strategy for accurately estimating the position of the sound source. Intuitively this makes sense: the best resolution for source estimation is directly in front of the microphone array. Under this algorithm, the array continues to move laterally until its center is directly in line with the estimated source position. One important aspect of the algorithm is the threshold we introduced into the controller, which accounts for the error within the system. Without a threshold, the microphone array would keep jumping around until it settled exactly in front of the estimated source. By introducing a threshold, we tell the system that the array is in an ideal location to obtain an estimated acoustic position. Furthermore, the threshold is an input to the system and can be changed depending on the parameters of the system.
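One lateral step of this centering behavior can be sketched as follows. The Python function, the default threshold, and the step size are illustrative placeholders, not values from the original MATLAB controller.

```python
def centering_step(center_x, source_x, threshold=0.05, step=0.1):
    """One lateral move of the array toward the estimated source.

    Returns (new_center_x, centered).  The threshold absorbs the
    estimation error described above: once the offset is within it, the
    array is declared centered, which prevents endless jumping around
    the exact source position.  threshold and step are tunable inputs.
    """
    offset = source_x - center_x
    if abs(offset) <= threshold:
        return center_x, True                   # close enough: stop moving
    move = max(-step, min(step, offset))        # step toward, never overshoot
    return center_x + move, False
```

Each controller pass calls this once, so the array creeps toward the source and halts as soon as the offset falls inside the threshold.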

Thirdly, we had to consider the limitations of our system. The two limiting factors in optimizing the acoustic source estimation were the physical constraints of the robots and the acoustic limitation. Therefore, the third algorithm asks:

 

Have the robots reached their acoustic or physical limits?

 

These constraints are the two factors that can prevent the system from locating the acoustic source. The most obvious is the physical constraint of the robots: they cannot move closer together once they are physically touching. As previously stated, there is also an acoustic limitation for the microphone array, and by asking this question the controller can determine whether the array has reached it. As the robots move closer together, the field of resolution increases, providing better estimation, but closing the microphone array too far can result in the robots colliding with each other or in the array re-entering far field. The figure below illustrates the limiting factors we had to consider when evaluating this system. With these constraints accounted for, we were able to design a controller that continually estimates the acoustic source position.
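The two stopping conditions can be sketched as a single predicate. This is illustrative Python; `min_spacing` (the separation at which the robots would touch) and the angle tolerance are hypothetical parameters.

```python
def at_limits(spacing, min_spacing, theta1, theta2, min_angle_diff=0.05):
    """Question 3: can the robots move no closer together?

    physical -- the pairs are already at the minimum spacing at which
                the robot bodies would touch;
    acoustic -- the two bearings have collapsed to the same angle,
                i.e. the array has crossed back into far field.
    """
    physical = spacing <= min_spacing
    acoustic = abs(theta1 - theta2) < min_angle_diff
    return physical or acoustic
```

The controller checks this predicate before every inward move, so the array never commands a step that would collide the robots or lose the source estimate.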

The final strategy of our controller optimizes the resolution around the estimated acoustic source. This algorithm determines whether the robots' change in position increased the resolution of the system. Finally, we ask:

 

Is the current critical resolution better than the previous critical resolution?

 

The critical resolution of the system is defined as the average distance between the six closest estimation positions and the current estimated source. This average is significant because small distances between the possible estimation points and the current estimate mean higher resolution. The controller bases further movements on this measure: if the current resolution is better than the previous resolution, the robots continue to adjust the microphone array; if the resolution gets worse, the controller tells the robots to move back to the previous position. Once the robots move back to a previous position based on the visibility of the estimation field, the controller has reached a stopping point and will not move the robots any further.
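The critical-resolution metric and the continue/revert decision can be sketched directly from the definition above. This is illustrative Python; the function names are our own, and smaller values mean better (tighter) resolution.

```python
import math

def critical_resolution(estimate, candidate_points, k=6):
    """Average distance from the current source estimate to its k
    nearest candidate estimation points (k = 6 per the definition).
    Tightly clustered candidates give a small value, i.e. high
    resolution around the estimate."""
    dists = sorted(math.dist(estimate, p) for p in candidate_points)
    return sum(dists[:k]) / min(k, len(dists))

def resolution_step(current, previous):
    """Question 4: keep adjusting while resolution improves (value
    shrinks); otherwise revert to the previous position and stop."""
    if previous is None or current < previous:
        return "continue"
    return "revert_and_stop"
```

Because reverting is terminal, the controller cannot oscillate: the first move that fails to improve the critical resolution ends the adjustment sequence.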

These four algorithms are the cornerstone of our system's robotic control. The block diagram below summarizes the main components of the Robotic Microphone Sensing controller. Based on observation and simulation, the strategies implemented in our controller result in optimal acoustic source estimation. The results of our controller can be seen in the last graphs on this page.
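Putting the four questions together, one pass of the controller loop might look like the following sketch. The original was a MATLAB block diagram; this Python version, its `state` dictionary keys, thresholds, and step sizes are all hypothetical placeholders meant only to show the order in which the checks are applied.

```python
def control_step(state):
    """One pass of the four-question controller (illustrative sketch)."""
    # 1. Near field or far field?  Same angle from both pairs => widen.
    if abs(state["theta1"] - state["theta2"]) < state["min_angle_diff"]:
        state["spacing"] += state["step"]
        return "widening"
    # 2. Is the microphone array centered on the estimated source?
    offset = state["source_x"] - state["center_x"]
    if abs(offset) > state["threshold"]:
        state["center_x"] += max(-state["step"], min(state["step"], offset))
        return "centering"
    # 3. Have the robots reached their acoustic or physical limits?
    if state["spacing"] <= state["min_spacing"]:
        return "at_limit"
    # 4. Is the current critical resolution better than the previous one?
    if state["resolution"] < state["prev_resolution"]:
        state["spacing"] -= state["step"]       # keep closing in
        return "refining"
    return "revert_and_stop"
```

Running this repeatedly first widens the array out of far field, then centers it, then tightens the spacing as long as the critical resolution keeps improving, matching the sequence of strategies described above.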


Washington University in St. Louis

Department of Electrical and Systems Engineering

Robotic Microphone Sensing: Optimizing Source Estimation and Algorithms for Adaptive Control

Chase LaFont - Undergraduate Research Summer/Fall ‘09