Introduction
Currently there is no dedicated tool in the Lustre test suites for LNet testing. The Lustre Unit Test Framework (LUTF) fills that gap by providing a means of testing existing LNet features as well as new features that may be added in the future. It also provides an easy way to add new test cases/scripts for any new LNet feature.
Objectives
This High Level Design document describes the current LUTF design, its code base, the infrastructure requirements for its setup, and the new features that can be added on top of the current design.
Reference Documents
Document | Link
---|---
LNet Unit Test Infrastructure (LUTF) Requirements |
Document Structure
This document is made up of the following sections:
- Design Overview
- Building the LUTF
- LUTF-Autotest Integration
- Infrastructure
LUTF Design Overview
The LUTF is designed with a Master-Agent approach to testing LNet. The Master and Agent LUTF instances use a telnet Python module to communicate with each other, and more than one Agent can communicate with a single Master instance at the same time. The Master instance controls the execution of the Python test scripts that test LNet on the Agent instances. It collects the results of all the tests run on the Agents and writes them to a YAML file. It also controls the synchronization mechanism between test scripts running on different Agents.
The diagram below shows how the LUTF interacts with LNet.
Figure 1: System Level Diagram
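As a sketch of the result-collection step described above, the Master could aggregate per-Agent records and serialize them to YAML. The record fields and layout below are illustrative only, not the actual LUTF output format (YAML emission is hand-rolled here to keep the sketch dependency-free; the real Master would use PyYAML):

```python
def results_to_yaml(results):
    """Serialize a list of flat per-Agent result records to a simple
    YAML list. Field names (agent/script/status) are hypothetical."""
    lines = ["lutf_results:"]
    for rec in results:
        first = True
        for key in ("agent", "script", "status"):
            prefix = "  - " if first else "    "
            lines.append("%s%s: %s" % (prefix, key, rec[key]))
            first = False
    return "\n".join(lines) + "\n"

# Example: results collected from two Agents for one test script.
results = [
    {"agent": "agent-1", "script": "test_x.py", "status": "PASS"},
    {"agent": "agent-2", "script": "test_x.py", "status": "FAIL"},
]
```

The Master would write the returned string to the results file once all Agents have reported.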
Building the LUTF
Building the LUTF first requires setting up an environment with all the required packages installed; the LUTF is then built with the GNU build system, like the rest of the Lustre tree.
The following subsections describe the steps of the building process.
Environment Set-Up
- Python 2.7.5 is required, along with some other Python packages:
  - netifaces
  - PyYAML
  - paramiko (some MR test scripts are written using paramiko, so it must be installed as well)
- SWIG (Simplified Wrapper and Interface Generator) is required to generate the glue code that allows the Python test scripts to call the DLC APIs.
- Password-less SSH: nodes running the LUTF must be set up with password-less SSH to each other.
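One quick way to verify the environment is to probe for the required Python packages before running any tests. This snippet is an illustrative helper, not part of the LUTF:

```python
def check_packages(names):
    """Return the subset of package names that cannot be imported."""
    missing = []
    for name in names:
        try:
            __import__(name)
        except ImportError:
            missing.append(name)
    return missing

REQUIRED = ["netifaces", "yaml", "paramiko"]  # PyYAML imports as "yaml"

if __name__ == "__main__":
    missing = check_packages(REQUIRED)
    print("Missing packages: %s" % (", ".join(missing) or "none"))
```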
Build within the Lustre tree using GNU tools
- All the other test suites/scripts for Lustre are placed under the lustre/tests/ directory. Place the LUTF under lustre/tests as well.
- Add LUTF as a subdirectory to be built in lustre/tests/Makefile.am.
- Create a Makefile.am under lustre/tests/lutf/ to generate the required object files and SWIG files.
- cd to the lustre directory and run sh autogen.sh.
- Run ./configure.
- Run make.
LUTF/AT Integration
TBD: How the LUTF integrates in the AT
Infrastructure
Automatic Deployment
TBD: How does the AT deploy the LUTF, collect results, show results in Maloo
C Backend
The C backend sets up the TCP connections (TCP sockets) that link the Master and Agent nodes (lutf.c). The LUTF can run on a node in either Master mode or Agent mode.
Master mode:
- Spawns a listener thread (lutf_listener_main) to listen for Agent connections (lutf.c).
- Maintains a list of the Agents.
- Starts up a Python interpreter (lutf_python.c).
- Provides a library which is SWIG-wrapped and callable from Python scripts (liblutf_agent.c).
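The accept loop of the listener thread can be sketched in Python as follows. This is an illustrative model of what lutf_listener_main does in C, not the actual implementation; the port number and function names are hypothetical:

```python
import socket
import threading

def make_listener(host="127.0.0.1", port=0):
    # Create the Master's listening socket. Port 0 picks a free port in
    # this sketch; the real port is defined in lutf.c.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    srv.settimeout(0.2)  # poll so the loop can notice the stop flag
    return srv

def listener_loop(srv, stop_event, agents):
    # Accept Agent connections and record them, in the spirit of
    # lutf_listener_main in lutf.c (illustrative only).
    while not stop_event.is_set():
        try:
            conn, addr = srv.accept()
        except socket.timeout:
            continue
        agents.append((conn, addr))  # maintain the list of Agents
    srv.close()
```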
Agent mode:
- Spawns a heartbeat thread (lutf_heartbeat_main) to send a heartbeat to the Master every 2 seconds. The Master uses this heartbeat to determine the liveness of the Agents (lutf.c).
- Starts up a Python interpreter through telnet (lutf_python.c).
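The Master-side liveness check driven by these heartbeats can be sketched as below. The 2-second interval comes from the description above; the timeout multiplier and class/method names are assumptions for illustration, not taken from lutf.c:

```python
HB_INTERVAL = 2.0               # Agents send a heartbeat every 2 seconds
HB_TIMEOUT = 3 * HB_INTERVAL    # hypothetical grace period before declaring death

class AgentLiveness:
    """Track the last heartbeat time per Agent, in the spirit of the
    Master-side aliveness check described above (illustrative only)."""

    def __init__(self):
        self._last_seen = {}

    def heartbeat(self, agent_id, now):
        # Record the time the Agent's heartbeat arrived.
        self._last_seen[agent_id] = now

    def alive_agents(self, now):
        # An Agent is alive if it has been heard from within the timeout.
        return [a for a, t in self._last_seen.items()
                if now - t <= HB_TIMEOUT]
```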
Python
Script execution and result collection
TBD:
- How are scripts deployed from the Master to the Agent?
- How are the scripts executed?
- How are the results collected?
Batch test
TBD: How should we execute a collection of tests? Discuss how it is currently done and whether it can be improved.
- Python Test infrastructure
- Infrastructure Level 1:
A Python master script for this infrastructure would facilitate the following:
- Deploys the LUTF on all the Agent nodes and the Master node.
- Provides a telnet server and client for Master<->Agent communication.
- Provides a mechanism to query IP addresses and the network interfaces (NIs) on the Agents. This information can further be fetched by the test scripts on demand using an API.
- Facilitates running individual Python test scripts on the Agents and collecting their results.
- Facilitates running the auto-test script, which is a test suite of all the test scripts related to one particular feature.
- Facilitates synchronization between tests running on different Agents by providing an API that uses a notification mechanism. For example, when a test script runs to completion on an Agent, it notifies the Master of its status by calling this API; the Master can then redirect this event to any script waiting on it.
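The notify/wait synchronization described in the last bullet can be sketched as follows. The class and method names are hypothetical, not the LUTF API:

```python
import threading

class SyncEvents:
    """Illustrative sketch of the notification mechanism: a script posts
    its status on completion and any script waiting on that event is
    woken and handed the status."""

    def __init__(self):
        self._lock = threading.Lock()
        self._slots = {}

    def _slot(self, name):
        with self._lock:
            return self._slots.setdefault(
                name, {"event": threading.Event(), "status": None})

    def notify(self, name, status):
        # Called when a test script runs to completion on an Agent.
        slot = self._slot(name)
        slot["status"] = status
        slot["event"].set()

    def wait_for(self, name, timeout=None):
        # Called by a script waiting on another test's completion.
        slot = self._slot(name)
        if not slot["event"].wait(timeout):
            return None  # timed out
        return slot["status"]
```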
- Infrastructure Level 2:
With this implementation, the functions used by multiple test scripts are defined in a base test-infrastructure file (lnet_test_infra_utils.py), which is then imported into each test script. This eases the process of writing new test scripts and avoids code redundancy.
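The shared-utility pattern might look like the following. The helper name and record format are illustrative; the source does not specify the contents of lnet_test_infra_utils.py:

```python
# Hypothetical sketch of a helper that would live in
# lnet_test_infra_utils.py and be imported by every test script.

def run_and_report(test_name, test_fn):
    """Run one test callable and return a uniform PASS/FAIL record, so
    individual test scripts do not repeat this boilerplate."""
    try:
        test_fn()
        return {"test": test_name, "status": "PASS", "error": None}
    except Exception as exc:
        return {"test": test_name, "status": "FAIL", "error": str(exc)}
```

A test script would then import the helper (e.g. `from lnet_test_infra_utils import run_and_report`, name hypothetical) instead of redefining it.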