DESIGN OF AN UNMANNED GROUND VEHICLE CAPABLE OF AUTONOMOUS NAVIGATION

ABSTRACT

This project is a multidisciplinary initiative with the objective of developing a scalable robotic platform of ground and aerial robotic swarms capable of tracking an object in an unknown environment. Conventionally, most robotic platforms perform tasks independently and are not designed to operate in coordination with other robotic agents, which limits their effectiveness in missions with critical time constraints. In contrast, the robotic platform developed in this project comprises autonomous robots that are deployed as part of a large network of swarm robots. The ground robots receive mission objectives either from an operator or from a supervisory robot such as an unmanned aerial vehicle (UAV). After receiving its mission objectives, the robot uses its own embedded intelligence to complete the mission in coordination with other robots. The robotic platform comprises different subsystems that facilitate control, coordination and communication between internal subsystems and other robots in the swarm network. Each robot consists of a chassis, sensor suite, drive unit, battery management system, navigation system, master control unit and a communication link. The robot is built on a differential wheel drive chassis that supports locomotion on rugged terrain. The sensor suite enables the robot to develop situational awareness by measuring a range of physical parameters, and it provides a 360-degree view of the surroundings with limited hardware by using novel mounting strategies. The drive unit interfaces with the high-torque motors and controls the speed and direction of the robot, while the battery management system ensures that all subsystems receive the power required to remain operational. The master control unit acts as the brain of the robot and takes local control action to ensure that the mission objectives are completed as reliably as possible. The robot has an inertial navigation system that performs position estimation, path planning and trajectory control. The communication link is built on the ZigBee platform and supports bidirectional, secure data transfer between all the robotic agents. The developed platform has a higher probability of tracking and locating objects in less time because all the robotic agents work in parallel and share localized intelligence with each other, thereby facilitating better situational awareness. This report outlines the design of the different subsystems of the unmanned ground vehicle.

CHAPTER 1 : BACKGROUND

The scope of the project is to develop an autonomous robotic platform that is capable of performing a diverse range of tasks. The robot may be independently controlled by an operator, or it may be part of a larger network of swarm robots and receive top-level instructions from a master robot, in which case the designed robot functions as a slave robot that carries out designated tasks. Either way, the robot architecture is expected to remain the same. The complete platform is being developed on a hybrid distributed embedded system comprising two main controllers: the Raspberry Pi and the PSoC (Programmable System on Chip). The fundamental idea behind using two controllers is to ensure the real-time performance of the entire system.

The robot is built on a differential wheel drive chassis comprising four high-torque DC geared motors. The two motors on either side of the robot are connected in parallel, enabling the control system to command two speeds, i.e. the right and left wheels as a pair. The turning mechanism of the robot is based on the angular velocity arising from the differential velocities of the wheels. The length-to-breadth ratio of the chassis has been optimized to facilitate turning. The robot is also equipped with a suspension system, making it capable of navigating rugged terrain where the obstacles are proportional to the size of the robot.

Two of the motors are equipped with optical encoders that give feedback on the shaft angle and the number of shaft rotations per unit time. The output from each optical encoder is a pair of quadrature pulses that must be measured by the sensing system. The robot is also equipped with a variety of sensors, ranging from a forward-looking IR module for obstacle avoidance to a GPS receiver and magnetometer for directional heading. The role of the PSoC is to measure the data from the encoders and the sensor suite and pass them on to the Raspberry Pi (RPi) for use in the control algorithm. The PSoC communicates with the RPi over a serial interface, and handshaking is implemented to ensure the integrity of the data. The RPi runs all the control algorithms; depending on where the robot needs to navigate, it calculates the desired velocity of both wheel pairs and communicates this information to the PWM units, which generate the desired PWM on two separate channels. These signals are fed to the motor driver units, which connect to the motors and implement the speed control of the system.

The block diagram shown in figure 1 elucidates the complete working of the system:

Fig 1: A block diagram of the prototype robotic platform

CHAPTER 2 : THE RASPBERRY PI

OBJECTIVE

In this section we investigate how the Raspberry Pi can be used to develop and integrate 'scalable software and hardware subsystems' for an autonomous robotic platform.

SCOPE OF THE RASPBERRY PI IN THE PROJECT

  1. To develop a robust interface between the control unit and the digital encoders connected to the motors.
  2. To integrate an array of digital and analog sensors with the control unit.
  3. To develop a communication stack layer at different nodes and to identify a suitable communication protocol to transmit relevant data between the different control unit nodes of the platform.
  4. To implement supervisory control of the robotic platform and integrate multiple functionalities such as navigation control and terrain mapping based on sensor-fusion data.

PROPOSED ARCHITECTURE

The proposed system consists of five functional blocks: the Raspberry Pi (RPi), the PSoC (Programmable System on Chip), the sensor array, the GPS and the motor assembly.

The Raspberry Pi (RPi), a pocket-sized single-board computer built around an ARM processor, functions as the master controller of the system. Figure 2 shows the different peripherals the board can support. The RPi is responsible for supervisory control of the system and is interfaced to a slave controller, the PSoC, a dedicated processor connected to all the sensor modules and feedback devices in the system. The PSoC captures the sensor fusion data, aggregates it, attaches a time stamp, wraps the data onto a communication stack and transmits it to the RPi. The RPi decodes the COM stack received from the PSoC and takes the necessary control action. The RPi directly manipulates the control signals to the motor drivers and consequently alters the trajectory of the robot. The sensor array provides environment variables that may assist in the primary or secondary objectives of the mission. The GPS subsystem provides closed-loop navigation support to facilitate an accurate path trajectory.

Fig 2: A complete overview of the Raspberry Pi 2 Model B; the top section shows the pin map and the lower section shows the locations of the different peripherals.
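The report does not specify the exact framing used on the PSoC-to-RPi link, so the sketch below is only an illustration of how the RPi side could unpack such a time-stamped sensor packet with pyserial. The start byte, field layout and checksum scheme shown here are assumptions chosen for the example, not the actual protocol.

# Illustrative sketch only: the packet layout (start byte, timestamp,
# six 16-bit readings, checksum) is an assumed example, not the real protocol.
import struct
import serial

START_BYTE = '\xAA'          # assumed frame delimiter
PAYLOAD_FMT = '<I6h'         # assumed layout: uint32 timestamp + six int16 readings
PAYLOAD_LEN = struct.calcsize(PAYLOAD_FMT)

def read_packet(port):
    """Block until a valid frame arrives, then return (timestamp, readings)."""
    while True:
        if port.read(1) != START_BYTE:
            continue                              # resynchronise on the start byte
        payload = port.read(PAYLOAD_LEN)
        checksum = port.read(1)
        if len(payload) != PAYLOAD_LEN or len(checksum) != 1:
            continue                              # timed out mid-frame, try again
        if sum(ord(b) for b in payload) & 0xFF != ord(checksum):
            continue                              # corrupted frame, discard it
        fields = struct.unpack(PAYLOAD_FMT, payload)
        return fields[0], list(fields[1:])        # timestamp, sensor readings

ser = serial.Serial('/dev/ttyAMA0', 9600, timeout=1)
timestamp, readings = read_packet(ser)
print 'timestamp:', timestamp, 'sensor values:', readings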

INTRODUCTION TO RPI

Fig 3: The Raspberry Pi, a closer look at the board

The Raspberry Pi, seen in figure 3, is an open-source, Linux-based embedded prototyping platform that has dramatically revolutionized the concept of a single-board, pocket-sized computer. It offers essentially all of the tools and utilities that a desktop computer can offer. The Raspberry Pi runs an operating system called Raspbian, a modified version of Linux, which helps keep the price of the platform low. The Raspberry Pi offers significant enhancements over traditional microcontroller prototyping platforms; however, this comes at the cost of the real-time behaviour of the platform. An application that has a number of tasks and benefits from an internet-enabled platform is the perfect use case for the Raspberry Pi. The Raspberry Pi is a versatile platform and supports programming in GUI environments such as Scratch, Python-based shell scripting, C, Java, Perl and a range of other languages. It connects to the outside world through peripherals such as HDMI, which can be used with any HDMI-enabled display. The Raspberry Pi 2 Model B supports four USB host ports, to which slave devices such as a keyboard, mouse, Wi-Fi dongle and web camera can be connected. The limits of the Raspberry Pi are bounded only by the imagination of its developers.

2.2 CODING ON THE RPI

2.2.1 WRITING AN OS IMAGE TO THE RPI

Fig 4: A screenshot of the Win32 Disk Imager software
  1. To load an image of the operating system onto the SD card you will need an imaging utility called Win32DiskImager. It is an open-source tool available from SourceForge.
  2. After inserting the SD card (via an adapter) into the PC, make sure that you know the drive letter assigned to it.
  3. Run Win32DiskImager as an administrator, select the Raspbian image file, then select the corresponding drive and burn the image to it as shown in figure 4. It is important to note that selecting the wrong drive at this stage can have a severe impact on the functioning of the PC and can also lead to a system crash.
  4. To burn the image onto the SD card, select the Write option and wait until the writing process is complete.

2.2.2 PLUGGING IN YOUR RASPBERRY PI

Fig 5: A closer look at the HDMI output of the Raspberry Pi
  1. Prior to plugging in the RPi, ensure that you have all the materials listed above, then follow the instructions below.
  2. Insert the SD card into the designated slot on the RPi.
  3. Plug the HDMI cable, as shown in figure 5, into an HDMI-enabled display so that you can view the command-line interface on the screen, and enter the login details as shown in figure 6.
Fig 6: A screenshot of the command line on the Raspberry Pi during start-up
  4. The default user name is 'pi' and the default password is 'raspberry'. Note that when you type the password it will not be displayed on the screen; this is built-in security in the Linux OS.
  5. To connect the Raspberry Pi to the internet, plug in an Ethernet cable connected to an open LAN network.
  6. After completing all the above steps, you can plug the power jack into the RPi. This boots the RPi, and a chain of messages appears on the display, stopping when the login details are requested.
  7. Enter the login details as described above; when the console reaches the second pause and requests an input command, type 'startx'.
  8. This enables the GUI mode, and you will see the Raspberry Pi GUI environment, from which you can carry out all the desired operations.

CHAPTER 3 : MATLAB AND RASPBERRY PI

Fig 7: System architecture of the interfaces between Raspberry Pi and other peripherals

The aim of the demo is to establish a robust communication channel between the RPi and external devices. One of the possible ways of doing this reliably is by using serial communication, as shown in figure 7. In our system we use the RS232 standard for the physical layer.

The serial configuration is 8 data bits, no parity, 1 start bit and 1 stop bit. Communication happens at a baud rate of 9600 to minimize the baud-rate error that becomes prominent at higher frequencies.

The smart sensor is emulated on an 8-bit microcontroller that sends data on the serial bus. The Raspberry Pi receives the data, decodes it and acknowledges each data packet. A serial-to-USB converter is used to monitor and visualize the data on a computer screen. The additional node does not take any active part in the communication on the bus.

The RPi, having operating voltages below those of the emulated smart sensor, requires a logic-level shifter for safe operation. This is typically achieved by using a high-impedance buffer that can take two supply voltages.

The emulated smart sensor sends serial data on the bus; the Raspberry Pi receives and decodes these messages, after which the internal algorithm classifies the appropriate action for the given input and the corresponding LEDs are turned on.
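The demo script itself is not listed in the report; as a minimal sketch, assuming pyserial and RPi.GPIO are available, the loop described above could look like the following. The acknowledge byte, the command-to-LED mapping and the BCM pin numbers are assumptions made for illustration.

# Minimal sketch of the demo loop: read a byte from the emulated smart
# sensor, acknowledge it, and light the matching LED. The ACK byte, the
# command values and the BCM pin numbers below are assumed for illustration.
import serial
import RPi.GPIO as GPIO

LED_PINS = {'1': 5, '2': 6, '3': 13}   # assumed command-to-LED mapping (BCM)
ACK = 'K'                              # assumed acknowledge byte

GPIO.setmode(GPIO.BCM)
for pin in LED_PINS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

ser = serial.Serial('/dev/ttyAMA0', 9600, bytesize=serial.EIGHTBITS,
                    parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE,
                    timeout=1)

try:
    while True:
        cmd = ser.read(1)              # one command byte per packet
        if not cmd:
            continue                   # timeout, nothing received
        ser.write(ACK)                 # acknowledge the packet
        for value, pin in LED_PINS.items():
            GPIO.output(pin, GPIO.HIGH if value == cmd else GPIO.LOW)
finally:
    GPIO.cleanup()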

GENERAL INTERFACE DETAILS:

Raspberry Pi 2 Model B is the platform for the project.

Firmware development for the Raspberry Pi is done in MATLAB using the MATLAB Support Package for Raspberry Pi.

An Ethernet communication link is used to establish a connection with the target (RPi).

The MATLAB code (.m file) is compiled and run on the computer, which controls the RPi and its peripherals via signals sent over Ethernet.

A modified version of the Raspbian Wheezy OS recommended by the MATLAB support package is used to boot the Raspberry Pi.

CHAPTER 4 : SAMPLE CODES

4.1 GENERAL PURPOSE INPUT OUTPUT

The Raspberry Pi features a number of GPIO pins, the exact count depending on the version of the RPi being used. All the GPIO pins are 3.3 V compatible and can function as both input and output pins. It is important to note that, since these pins are 3.3 V compatible, logic-level shifters are required when interfacing the RPi with 5 V devices.

The code below shows how to configure a GPIO pin as an input or output and how to write and read digital data on the pin. It is important to understand that the RPi has different pin-numbering layouts, which need to be specified in the code. In this report we will use the BCM numbering defined by Broadcom.

#!/usr/bin/env python
import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BCM)          # use the Broadcom (BCM) pin numbering
GPIO.setup(17, GPIO.OUT)        # configure GPIO 17 as an output

for i in range(0, 10):          # blink the pin ten times
    GPIO.output(17, 1)          # drive the pin high
    time.sleep(1)
    GPIO.output(17, 0)          # drive the pin low
    time.sleep(1)

GPIO.cleanup()

4.2 SERIAL COMMUNICATION

The RPi features a full-duplex serial port with baud rates of up to 115200. It can be integrated with the RS232 or RS485 standards, or used directly at TTL levels.

The code below reads a single character from the RX line and prints it on the terminal window. After running the code, it is easy to notice that even when there is no data on the RX line the RPi records a junk value.

#!/usr/bin/env python
import time
import serial

ser = serial.Serial(
    port='/dev/ttyAMA0',
    baudrate=9600,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    bytesize=serial.EIGHTBITS,
    timeout=1
)

while 1:
    serialdata = ser.read(1)    # read one character from the RX line
    print serialdata            # prints even when nothing was received

The following code adds an extra line that ensures that the junk values are omitted and the bug in the previous version of the code is fixed.

#!/usr/bin/env python
import time
import serial

ser = serial.Serial(
    port='/dev/ttyAMA0',
    baudrate=9600,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    bytesize=serial.EIGHTBITS,
    timeout=1
)

while 1:
    serialdata = ser.read(1)    # read one character from the RX line
    if serialdata != '':        # skip the empty reads caused by the timeout
        print serialdata

The following code is commonly known as the echo program, as it emulates an echo: any character received on the RX line is sent back on the TX line. This is a standard test to check the integrity of the serial module.

#!/usr/bin/env python
import time
import serial

ser = serial.Serial(
    port='/dev/ttyAMA0',
    baudrate=9600,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    bytesize=serial.EIGHTBITS,
    timeout=1
)

while 1:
    serialdata = ser.read(1)    # read one character from the RX line
    if serialdata != '':        # ignore empty reads
        ser.write(serialdata)   # echo the character back on the TX line

The following code implements the handshake and reads the data transmitted by the PSoC. First the character 'H' is transmitted to the PSoC; the RPi then waits for the character 'A' from the PSoC, responds with a 'K', and then reads 8 bits (one byte) of data and displays it in the terminal window.

#!/usr/bin/env python
import time
import serial

ser = serial.Serial(
    port='/dev/ttyAMA0',
    baudrate=9600,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    bytesize=serial.EIGHTBITS,
    timeout=1
)

ser.write('H')                      # start the handshake

while 1:
    serialdata = ser.read(1)        # wait for the PSoC to answer
    if serialdata == 'A':           # PSoC acknowledged the handshake
        ser.write('K')              # confirm, then read the payload
        data = ser.read(1)          # read one byte (8 bits) of data
        print data
        break

4.3 ROBOT LOCOMOTION

The following code implements the open-loop navigation of the robot. In this code the Raspberry Pi uses GPIO 20 and GPIO 21 for direction control, GPIO 18 for PWM control, and GPIO 23 as an input pin.

#!/usr/bin/env python
import RPi.GPIO as GPIO
import time

dc = 30                               # starting duty cycle in percent

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)              # PWM output
p = GPIO.PWM(18, 100)                 # software PWM on GPIO 18; 100 Hz assumed (value not given in the original listing)
p.start(0)
GPIO.setup(20, GPIO.OUT)              # direction control
GPIO.setup(21, GPIO.OUT)              # direction control
GPIO.setup(23, GPIO.IN)               # start/trigger input

while GPIO.input(23) != 1:            # wait for the start signal
    time.sleep(0.001)

for j in range(1, 5):
    # go straight
    GPIO.output(20, 1)
    GPIO.output(21, 1)
    # ramp up
    for i in range(0, 70):
        p.ChangeDutyCycle(dc + i)
        time.sleep(0.05)
    time.sleep(0.2)
    # go left
    p.ChangeDutyCycle(0)
    time.sleep(0.5)
    GPIO.output(20, 1)
    GPIO.output(21, 0)
    for i in range(30, 71):
        p.ChangeDutyCycle(i)
        time.sleep(0.05)

p.stop()
GPIO.cleanup()

CHAPTER 5 : ENCODERS DOCUMENTATION

Fig 8: An internal view of the gear mechanism inside the high torque DC motor

The geared motor system obeys a simple equation: PCR1 · ω1 = PCR2 · ω2, where PCR is the pitch circle radius of a gear and ω its angular velocity.

For approximately 74.83 rotations of the base motor shaft, the wheel shaft rotates by one revolution; this can be better understood by observing figure 8.

Fig 9: A screenshot of the quadrature encoder pulses from the motor shaft encoders
  1. Channel 1 and channel 2 are the outputs of the two Hall-effect sensors attached to the encoder module.
  2. The two Hall-effect sensors in the encoder module are redundant; they provide a plausibility check, so that if one fails the system is not compromised.
  3. For one revolution of the base motor shaft, both channel 1 and channel 2 have 12 rising edges and 12 falling edges (per channel), as seen in figure 9.
  4. Total edges for one base motor shaft revolution = channel 1 RE + channel 2 RE + channel 1 FE + channel 2 FE = 12 + 12 + 12 + 12 = 48.
Fig 10: The product description available commercially

{(No. of rising + falling edges on channel 1) + (No. of rising + falling edges on channel 2)} × gear ratio = total number of edges for one revolution of the wheel shaft
48 × 74.83 = 3591.84

Note: the factor of 48 corresponds to one revolution of the base motor shaft.

The encoder output from one channel is sufficient to compute the instantaneous speed of the robot wheels:
12 falling edges = 1 revolution of the base motor shaft
1 revolution of the wheel = 74.83 base motor shaft revolutions (taken from figure 10)
Number of falling edges in 1 revolution of the wheel = 12 × 74.83 = 897.96 ≈ 898

Distance covered by the robot in one revolution of the wheel = 2πr (where r is the radius of the wheel).

The algorithm below computes the speed using a high-precision timer and the above data.

Assumption: the wheels have no slip and no sliding, i.e. the surface is ideal and friction is uniform.

WAIT till a falling edge is detected (via interrupt)
If a falling edge is detected:
    Start timer: Time = 0
    Check if falling-edge count == 898
        False: increment the falling-edge count
        True:  wheel revolutions ++
               falling-edge count = 0
               Stop timer: Time = xx
               break
Speed = 2πr / Time
Total distance = (2πr · wheel revolutions) + (2πr · (falling-edge count / 898))
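As a rough illustration of this algorithm (in the actual platform the PSoC performs the edge counting), the sketch below counts falling edges on one encoder channel using an RPi.GPIO interrupt and derives speed and distance from the constants above. The encoder GPIO pin and the wheel radius are assumed values.

# Illustrative sketch of the edge-counting algorithm; in the actual platform
# the PSoC performs this measurement. GPIO pin and wheel radius are assumptions.
import math
import time
import RPi.GPIO as GPIO

ENCODER_PIN = 24                 # assumed BCM pin wired to one encoder channel
EDGES_PER_WHEEL_REV = 898        # 12 falling edges x 74.83 gear ratio (rounded)
WHEEL_RADIUS = 0.05              # assumed wheel radius in metres

edge_count = 0
wheel_revs = 0
last_rev_time = time.time()
speed = 0.0                      # metres per second

def on_falling_edge(channel):
    """Count encoder edges and update the speed once per wheel revolution."""
    global edge_count, wheel_revs, last_rev_time, speed
    edge_count += 1
    if edge_count >= EDGES_PER_WHEEL_REV:
        now = time.time()
        speed = (2 * math.pi * WHEEL_RADIUS) / (now - last_rev_time)
        last_rev_time = now
        wheel_revs += 1
        edge_count = 0

GPIO.setmode(GPIO.BCM)
GPIO.setup(ENCODER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.add_event_detect(ENCODER_PIN, GPIO.FALLING, callback=on_falling_edge)

try:
    while True:
        distance = 2 * math.pi * WHEEL_RADIUS * (wheel_revs + edge_count / 898.0)
        print "speed: %.3f m/s, distance: %.3f m" % (speed, distance)
        time.sleep(1)
finally:
    GPIO.cleanup()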


CHAPTER 6 : THE PROXIMITY SENSOR

A commonly used design of proximity sensor available on the market is shown in figure 11.

As can be seen, it has a potential-divider network, a comparator IC (LM358), an IR LED, a photodiode and two resistors. Its range is about 10 to 15 cm. It has three pins: one for the supply (5 volts), one for ground, and one for the output. Based on the preset resistance of the potentiometer, the sensor gives a high output when an object is within the set distance.

Fig 11: The proximity sensor

These sensors are commonly used in the field of robotics to design line-follower, obstacle-avoider, wall-follower and cliff-avoider robots.


WORKING OF THE PROXIMITY SENSOR

An IR LED is a type of light-emitting diode that emits infrared radiation. A photoresistor is a light-dependent device that has a high output resistance in the absence of light. In this circuit the LM358 is used as a comparator between the output of the photo detector (connected to the non-inverting terminal) and a reference voltage set by a potentiometer (connected to the inverting terminal).

The output of the comparator is then connected to a diode as shown in figure 12.

A +VCC output is obtained when the IR LED and photo detector face a white surface, and zero when they face a black surface. The figure below shows a basic circuit diagram for this module. The working is based on the principle of absorption of IR radiation by various objects. The sensing system consists of an IR LED and an IR photodiode, arranged parallel to each other. Whenever an object comes in front of the system, it absorbs a portion of the IR radiation and reflects the remainder. The reflected IR radiation is received by the photodiode, which is kept in a voltage-divider arrangement. The output of the photodiode is fed to the comparator, which gives a bi-state output. The distance up to which the sensor can detect an obstacle can be adjusted by varying the resistance of the potentiometer that sets the reference voltage.

Fig 12: A schematic of the IR proximity detection sensor
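Because the comparator produces a clean digital output, reading the sensor from a controller amounts to reading one digital input. The short sketch below polls the sensor output on the RPi; the BCM pin number and the active-high assumption depend on the actual wiring of the module.

# Minimal sketch: poll the proximity sensor's digital output.
# The BCM pin number and the active-high assumption depend on the wiring.
import time
import RPi.GPIO as GPIO

SENSOR_PIN = 4                       # assumed BCM pin wired to the sensor output

GPIO.setmode(GPIO.BCM)
GPIO.setup(SENSOR_PIN, GPIO.IN)

try:
    while True:
        if GPIO.input(SENSOR_PIN):   # output goes high when an obstacle reflects IR
            print 'Obstacle detected'
        time.sleep(0.1)
finally:
    GPIO.cleanup()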

CHAPTER 7 : THE PWM GENERATION SYSTEM

In order to control the speed and the direction of the robot it is necessary to control the RPM of each wheel pair separately. Hence, two separate PWM signals must be generated by the Raspberry Pi (one for each MOSFET driver). The system consists of a Raspberry Pi connected to MOSFET motor drivers (IRF8734PbF). The PWM outputs from the Raspberry Pi are applied as control signals to the motor drivers based on the feedback received from the speed encoders.

There are two ways of generating a PWM signal on the Raspberry Pi:

  1. Hardware PWM:
    • A dedicated GPIO is used for generating the PWM signal
    • Only one hardware PWM pin is available on the RPi
    • Provides very good resolution (1 microsecond)
    • Low jitter
  2. Software PWM:
    • All GPIO pins can be used for generating a PWM signal
    • Resolution is coarser (100 microseconds) than that of hardware PWM
    • High jitter

There are several physical constraints on the system. For instance, the motor assembly has two MOSFET drivers, which requires the generation of two separate PWM signals from the Raspberry Pi. Since the Raspberry Pi has only a single hardware PWM, we might want to use the software PWM available on the RPi to generate the other PWM output. Alternatively, I2C- and SPI-enabled smart chips such as the Adafruit 16-channel PWM driver can be used to enhance the PWM capability of the Raspberry Pi.

The PWM frequency limitations are discussed in this section. The maximum resolution (Tres) that the RPi can provide for a software PWM is 100 µs. If we require fine control of the PWM signal, such that a 1% change in duty cycle is possible, the default PWM range (R) is 100. The PWM period is therefore

T = R × Tres = 100 × 100 µs = 10 ms, and f = 1 / T = 100 Hz

This gives a default software PWM frequency of 100 Hz. As the minimum pulse width cannot go below 100 µs and given the range constraint, the software PWM frequency is limited to the order of hundreds of hertz. For a hardware PWM, the maximum resolution (Tres) is 1 µs, the default PWM frequency is 10 kHz and the range is 100. To get a higher frequency we can lower the range, or vice versa. However, for precise closed-loop operation such as motor speed control, a high range is desired. The maximum switching frequency of the MOSFET driver is 40 kHz and the minimum switching frequency is 10 kHz, so the MOSFET must be operated within the range of 10 kHz to 40 kHz.

The PWM frequency should also be much greater than the resonant frequency of the motor, which is usually around 50 hertz, i.e. f >> fres of the motor. If the PWM frequency is of the order of a few hertz, then whenever the duty cycle changes the frequency applied to the driver circuit can come close to the resonant frequency; this leads to interference, can cause damage to the RPi and might also stall the motor. We will use 10 kHz, which is the default frequency of the hardware PWM. To conclude, hardware PWM is preferred over software PWM owing to its better resolution and better operating frequency range.

HARDWARE PWM GENERATION USING I2C INTERFACE

The Raspberry Pi has only one hardware PWM pin, and we require two independent PWM channels, one for each motor driver. To obtain a resolution and frequency range comparable to the hardware PWM pin while meeting our requirements, we use a PWM driver that communicates over I2C.

I2C Definition:

I2C (Inter-Integrated Circuit) is a serial bus that connects multiple devices and enables them to communicate with each other. The devices are termed masters and slaves based on the operations they perform. Usually there is one master, and multiple slave devices are connected to it. The communication uses only two signal wires (Serial Data and Serial Clock) to exchange information.

Configuring and installing I2C in Raspberry Pi:

The Raspberry Pi is provided with two I2C interfaces (two pairs of SDA and SCL pins). We have made use of physical pins 3 and 5 for SDA and SCL respectively. Multiple devices can be connected to the Raspberry Pi over the I2C interface, and each device is accessed through the unique address associated with it.

Installing the i2c tools utility

The following commands enable System Management Bus (SMBus) support and install the I2C tools. SMBus is a subset of the I2C protocol and provides the commands used to access I2C devices.

sudo apt-get install python-smbus
sudo apt-get install i2c-tools

Installing the Kernel Support

The command sudo raspi-config is run to install the I2C support for the ARM core and the Linux kernel. Once the Raspberry Pi configuration window opens, select

Advanced Options → A7 I2C and enable the ARM I2C interface.

Testing the I2C:

The following command should be entered in the command-line window if a Raspberry Pi model B is used.

sudo i2cdetect -y 1

Fig 13: A screenshot of the command-line window giving details of the I2C communication

Figure 13 tells us that two I2C addresses, 0x40 and 0x70, are in use. The user can use either of them.

Adafruit 16-channel, 12-bit PWM Driver:

A 16-channel, 12-bit PWM driver is used to generate the PWM signals for both motors. The driver is controlled over an I2C bus that supports clock speeds of up to 1 MHz. As discussed before, due to the lack of more than one hardware PWM channel on the Raspberry Pi, the multi-channel PWM driver is required to produce the necessary PWM signals.

Connecting the Raspberry Pi to the PWM driver:

Wiring:

The PCA9685 is the chip used as the driver, and it is powered with 3.3 V from the Raspberry Pi. Since the PWM signal is fed to a motor driver whose operating voltage is also 3.3 V, the supply used to power the driver stage (V+) is likewise 3.3 V.

Figure 14 shows the connections from the Raspberry Pi to the PWM Driver

Fig 14: A schematic of the PWM generation system

HARDWARE PWM CODE:

The Adafruit PWM driver has a default library, which is imported here to make use of the PWM operations. The link for the library is given below:

https://github.com/adafruit/Adafruit-PWM-Servo-Driver-Library/blob/master/Adafruit_PWMServoDriver.cpp

The function pwm.setPWMFreq(frequency) is used to set the required PWM frequency. The function pwm.setPWM(channel, on, off) is used to set the pulse width on the specified channel. Since the PWM driver has a 12-bit resolution, the on and off values can range from 0 to 4095.
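Putting the two functions together, a minimal sketch for driving both motor channels from the PCA9685 might look like the following. The import path corresponds to the older Adafruit Python library, the 0x40 address matches the i2cdetect output above, and the frequency and duty-cycle values are example figures; adjust them to the library version and driver requirements actually in use.

# Minimal sketch: generate two independent PWM signals with the PCA9685 driver.
# The import path and the 0x40 address assume the older Adafruit Python library.
from Adafruit_PWM_Servo_Driver import PWM

pwm = PWM(0x40)              # PCA9685 at I2C address 0x40 (see the i2cdetect output)
pwm.setPWMFreq(1000)         # example frequency; choose per the driver's supported range

def set_duty(channel, percent):
    """Set the duty cycle of one PWM channel as a percentage (12-bit resolution)."""
    off = int(4095 * percent / 100.0)
    pwm.setPWM(channel, 0, off)     # pulse turns on at count 0 and off at 'off'

set_duty(0, 60)              # one motor driver at 60% duty cycle
set_duty(1, 40)              # the other motor driver at 40% duty cycle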


