Commit

SOP
Kye committed Jul 7, 2023
1 parent 287b49c commit 3b78c06
Showing 2 changed files with 173 additions and 20 deletions.
155 changes: 154 additions & 1 deletion testing/README.md
@@ -1,3 +1,97 @@
# Pre-training Testing Suite SOP for Transformer LLMs

The following Standard Operating Procedure (SOP) outlines the process and checkpoints for evaluating and testing a Transformer-based Large Language Model (LLM) before pretraining. Following these steps helps ensure the model is ready for training and will behave as expected.

## 1. Model Architecture Review
- Confirm the model architecture aligns with the specific NLP task.
- Ensure the number of layers, dimensions, heads, and other configuration parameters are set as intended.
- Validate the selection of activation functions, loss functions, and optimization methods.
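
Parts of this review can be automated. A minimal sketch, assuming the intended hyperparameters are collected in a dictionary; the names and values below are illustrative placeholders, not Andromeda's actual configuration:

```python
# Intended hyperparameters for the run; the values here are placeholders.
EXPECTED_CONFIG = {
    "num_layers": 24,
    "dim": 2048,
    "heads": 16,
    "vocab_size": 64007,
    "max_seq_len": 1024,
}

def review_architecture(actual_config: dict) -> None:
    """Compare the instantiated model's configuration against the intended one."""
    for key, expected in EXPECTED_CONFIG.items():
        actual = actual_config.get(key)
        assert actual == expected, f"{key}: expected {expected}, got {actual}"

# Example: review_architecture(vars(model_args)), or a dict built from the constructor call.
```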

## 2. Forward Pass Test
- Use a sample input to perform a forward pass and verify the output.
- Check the output shape matches the expected shape.

## 3. Backward Pass Test
- Conduct a backward pass to ensure the model can correctly calculate gradients.
- Validate that the gradients are not null, NaN, or infinite.

## 4. Parameter Initialization Test
- Check that all layers and their parameters are correctly initialized.
- Inspect the weights before and after a forward and backward pass to ensure they update correctly.
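
A minimal sketch of an initialization sanity check; it verifies parameters are finite and weight matrices are not all zero (the update-after-a-step check itself is covered by `test_optimizer_step` in the suite below):

```python
import torch
import torch.nn as nn

def check_initialization(model: nn.Module) -> None:
    """Verify parameters are finite and weight matrices are not left at zero."""
    for name, param in model.named_parameters():
        assert torch.isfinite(param).all(), f"{name} contains NaN or Inf at initialization"
        if param.dim() > 1:  # weight matrices; biases are commonly zero-initialized
            assert param.abs().sum() > 0, f"{name} is all zeros at initialization"

# Example with a stand-in module; pass the real model in practice.
check_initialization(nn.TransformerEncoderLayer(d_model=64, nhead=4))
```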

## 5. Optimizer and Loss Function Test
- Ensure the optimizer and loss function are appropriate for the task.
- Verify that the loss reduces and the model learns during initial training phases.
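
A minimal sketch of the "loss goes down" check, run on a tiny stand-in classifier so it is cheap; the same loop applies to the real model, optimizer, and data:

```python
import torch
import torch.nn as nn
import torch.optim as optim

def test_loss_decreases(model, loss_fn, optimizer, inputs, targets, steps=20):
    """Overfit a single batch for a few steps and check that the loss drops."""
    losses = []
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
        losses.append(loss.item())
    assert losses[-1] < losses[0], f"loss did not decrease: {losses[0]:.4f} -> {losses[-1]:.4f}"

# Tiny stand-in classifier: 8 classes, random batch of 32 examples.
model = nn.Linear(16, 8)
test_loss_decreases(
    model,
    nn.CrossEntropyLoss(),
    optim.Adam(model.parameters(), lr=1e-2),
    torch.randn(32, 16),
    torch.randint(0, 8, (32,)),
)
```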

## 6. Data Loader Test
- Confirm the data loaders provide data in the correct format and batch size for the model.
- Validate any data augmentation procedures used.
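
A minimal sketch of such a check, here against an illustrative `DataLoader` of random token ids; substitute the real dataset and the batch size and sequence length used in training:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Illustrative dataset of random token ids standing in for tokenized text.
dataset = TensorDataset(torch.randint(0, 256, (256, 1024)))
loader = DataLoader(dataset, batch_size=8, shuffle=True)

def test_data_loader(loader, batch_size=8, seq_len=1024):
    """Check that each batch has the expected shape and dtype for the model."""
    batch = next(iter(loader))[0]
    assert batch.shape == (batch_size, seq_len), f"unexpected batch shape {batch.shape}"
    assert batch.dtype == torch.long, "token ids should be integer (long) tensors"

test_data_loader(loader)
```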

## 7. Learning Rate Scheduler Test
- If a learning rate scheduler is used, verify it is properly set up and adjusts the learning rate as expected.
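
A minimal sketch of a scheduler check, using `StepLR` purely as an illustration; substitute whatever scheduler and decay settings the training run actually uses:

```python
import torch
import torch.optim as optim

# A throwaway parameter so the optimizer has something to manage.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = optim.Adam([param], lr=1e-3)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

def test_scheduler(optimizer, scheduler, steps=10, expected_lr=1e-4):
    """Step the scheduler and check the learning rate decays as configured."""
    for _ in range(steps):
        optimizer.step()   # keeps the optimizer/scheduler call order correct
        scheduler.step()
    lr = optimizer.param_groups[0]["lr"]
    assert abs(lr - expected_lr) < 1e-12, f"expected lr {expected_lr}, got {lr}"

test_scheduler(optimizer, scheduler)
```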

## 8. Hardware Compatibility Test
- Make sure the model, data, and all necessary components are correctly moved to the desired device (CPU, GPU, or TPU).
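
A minimal sketch of a device-placement check; the small linear model is only a stand-in, and the test falls back to CPU when no GPU is available:

```python
import torch
import torch.nn as nn

def test_device_placement(model, batch, device):
    """Move model and data to the target device and confirm a forward pass runs there."""
    model = model.to(device)
    batch = batch.to(device)
    output = model(batch)
    assert output.device.type == torch.device(device).type, (
        f"output ended up on {output.device}, expected {device}"
    )

device = "cuda" if torch.cuda.is_available() else "cpu"
test_device_placement(nn.Linear(16, 16), torch.randn(4, 16), device)
```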

## 9. Reproducibility Test
- Set random seeds for all components that introduce randomness to ensure the model training is reproducible.
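
A minimal sketch that seeds the common sources of randomness; the deterministic cuDNN settings trade some speed for reproducibility:

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Seed Python, NumPy, and PyTorch (CPU and all GPUs) for reproducible runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Make cuDNN deterministic; this can slow training down.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
```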

### Pre-training Testing Suite Example

This example assumes the Andromeda model class is importable from this repository, that a CUDA GPU is available, and that the model maps token ids of sequence length 1024 to logits over a 64,007-token vocabulary.

```python
import unittest

import torch
import torch.optim as optim

# Import the Andromeda model class; adjust the path to match the repository layout.
from Andromeda import Andromeda


class PretrainingTest(unittest.TestCase):

    def setUp(self):
        # Instantiate the model and move it to the GPU to match the input tensor.
        self.model = Andromeda().cuda()
        self.optimizer = optim.Adam(self.model.parameters())
        self.loss_function = torch.nn.CrossEntropyLoss()
        # Dummy batch of token ids: batch size 1, sequence length 1024.
        self.input_tensor = torch.randint(0, 256, (1, 1024)).cuda()

    def test_forward_pass(self):
        output = self.model(self.input_tensor)
        self.assertEqual(output.shape, (1, 1024, 64007))  # (batch, sequence length, vocab size)

    def test_backward_pass(self):
        self.optimizer.zero_grad()
        output = self.model(self.input_tensor)
        # CrossEntropyLoss expects (N, C) logits and (N,) targets, so flatten both;
        # the input ids double as dummy targets here.
        loss = self.loss_function(output.view(-1, output.size(-1)), self.input_tensor.view(-1))
        loss.backward()
        for name, parameter in self.model.named_parameters():
            self.assertIsNotNone(parameter.grad, f'Gradient for {name} is missing')
            self.assertFalse(torch.isnan(parameter.grad).any(), f'Gradient for {name} contains NaNs')
            self.assertFalse(torch.isinf(parameter.grad).any(), f'Gradient for {name} contains Infs')

    def test_optimizer_step(self):
        initial_params = [param.clone() for param in self.model.parameters()]
        output = self.model(self.input_tensor)
        loss = self.loss_function(output.view(-1, output.size(-1)), self.input_tensor.view(-1))
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        for initial_param, param in zip(initial_params, self.model.parameters()):
            self.assertFalse(torch.equal(initial_param, param), 'Model parameters did not change after an optimizer step')

    def test_data_loader(self):
        pass  # TODO: check batch shape, dtype, and format (see section 6)

    def test_learning_rate_scheduler(self):
        pass  # TODO: check the scheduler adjusts the learning rate (see section 7)

    def test_hardware_compatibility(self):
        pass  # TODO: check model and data move to the target device (see section 8)

    def test_reproducibility(self):
        pass  # TODO: seed all sources of randomness and compare two runs (see section 9)


if __name__ == '__main__':
    unittest.main()
```

# Metrics

To ensure the performance and reliability of the language model, we need to track a variety of metrics. These include both quantitative measures, such as accuracy, perplexity, speed and memory consumption, as well as qualitative measures like coherence, relevance, and versatility of the generated responses. Here is a list of potential metrics to consider:

**1. Accuracy Metrics:**
@@ -19,4 +113,63 @@

This suite tests the model for speed (latency and throughput) and memory consumption. In addition to these, you should also conduct manual tests to evaluate the model's output on various inputs for coherence, relevance and versatility.
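
As a point of reference, latency and throughput can be measured with a simple timing loop like the one sketched below; the warm-up pass and `torch.cuda.synchronize()` matter on GPU because kernel launches are asynchronous, and the input shape shown is only illustrative:

```python
import time

import torch

def measure_speed(model, inputs, runs=100):
    """Return average latency (seconds) and throughput (inferences/second)."""
    model.eval()
    with torch.no_grad():
        model(inputs)  # warm-up pass
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(runs):
            model(inputs)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        elapsed = time.time() - start
    return elapsed / runs, runs / elapsed

# Example usage with the pretraining model and a dummy batch:
# latency, throughput = measure_speed(model, torch.randint(0, 256, (1, 1024)).cuda())
```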

Remember, there is no specific test here for accuracy metrics such as perplexity, BLEU score, ROUGE score, or F1 score, because these are task-specific and need to be evaluated on a per-task basis.


# Check List

When preparing a model for training, there are several things you need to check:

1. **Model Architecture**: The model's structure should be reviewed to ensure it aligns with the task at hand. This includes checking the number of layers, dimensions, heads, etc.

2. **Forward Pass**: Make sure that the model's forward pass works correctly. Given a random input, the model should be able to return an output of expected shape.

3. **Backward Pass**: Similarly, the model should be able to calculate gradients correctly when a random loss is back-propagated.

4. **Parameter Initialization**: Ensure all layers and their parameters are correctly initialized. This can be checked by inspecting the weights before and after a forward and backward pass.

5. **Optimizer and Loss Function**: Verify that the optimizer and loss function are appropriate for the task and are correctly linked to the model.

6. **Data Loaders**: Check the data loaders to make sure they are providing data in the correct format and batch size for the model.

7. **Learning Rate Scheduler**: If a learning rate scheduler is used, ensure it's properly set up.

8. **Hardware Compatibility**: Ensure that the model, data, and all necessary components are correctly moved to the desired device (e.g., GPU).

Preparing an extensive playbook and standard operating procedure for testing and evaluating Large Language Models (LLMs), such as a Transformer-based NLP model, prior to pretraining involves a set of rigorous steps that ensure the model is ready for training. Here is a comprehensive list of steps and checkpoints:

**1. Model Architecture Review:**
- Check if the model architecture aligns with the specific NLP task.
- Verify if the configuration parameters such as number of layers, dimensions, heads, etc., are set as intended.
- Ensure that the activation functions, loss functions, and optimization methods are selected appropriately.

**2. Forward Pass Test:**
- Ensure the model can complete a forward pass with dummy input data.
- Check if the output shape matches the expected shape.

**3. Backward Pass Test:**
- The model should calculate gradients correctly when a random loss is back-propagated.
- Validate that the gradients are not null, NaN, or infinite.

**4. Parameter Initialization Test:**
- All layers and their parameters should be correctly initialized.
- Inspect the weights before and after a forward and backward pass to make sure they're updating correctly.

**5. Optimizer and Loss Function Test:**
- Ensure that the optimizer and loss function are suitable for the task.
- Verify that the loss reduces and the model learns during initial training phases.

**6. Data Loader Test:**
- The data loaders should provide data in the correct format and batch size for the model.
- Validate data augmentation procedures if any are used.

**7. Learning Rate Scheduler Test:**
- If a learning rate scheduler is used, it should be properly set up and adjust the learning rate as expected.

**8. Hardware Compatibility Test:**
- Make sure the model, data, and all necessary components can be correctly moved to the desired device (CPU, GPU, or TPU).

**9. Check for Reproducibility:**
- Make sure that the model training is reproducible by setting random seeds for all components that introduce randomness.

The following is an implementation of the pre-training testing suite:
38 changes: 19 additions & 19 deletions testing/main.py
@@ -40,29 +40,29 @@ def test_optimizer_step(self):
     for initial_param, param in zip(initial_params, self.model.parameters()):
         self.assertFalse(torch.equal(initial_param, param), 'Model parameters did not change after an optimizer step')
 
-    def test_prediction(self):
-        start_time = time.time()
-        prediction = self.model(self.test_input)
-        latency = time.time() - start_time
+    # def test_prediction(self):
+    #     start_time = time.time()
+    #     prediction = self.model(self.test_input)
+    #     latency = time.time() - start_time
 
-        self.assertLess(latency, 1)  # test if latency is less than 1 second
-        self.assertEqual(prediction.shape, (1, 1024, 64007))  # test if output shape is correct
+    #     self.assertLess(latency, 1)  # test if latency is less than 1 second
+    #     self.assertEqual(prediction.shape, (1, 1024, 64007))  # test if output shape is correct
 
 
-    def test_memory_consumption(self):
-        start_mem = torch.cuda.memory_allocated()
-        prediction = self.model(self.test_input)
-        end_mem = torch.cuda.memory_allocated()
-        mem_diff = end_mem - start_mem
-        self.assertLess(mem_diff, 2 * 1024**3)  # memory diff should be less than 2 GB
+    # def test_memory_consumption(self):
+    #     start_mem = torch.cuda.memory_allocated()
+    #     prediction = self.model(self.test_input)
+    #     end_mem = torch.cuda.memory_allocated()
+    #     mem_diff = end_mem - start_mem
+    #     self.assertLess(mem_diff, 2 * 1024**3)  # memory diff should be less than 2 GB
 
-    def test_throughput(self):
-        start_time = time.time()
-        for _ in range(100):
-            prediction = self.model(self.test_input)
-        end_time = time.time()
-        throughput = 100 / (end_time - start_time)
-        self.assertGreater(throughput, 10)  # model should handle at least 10 inferences per second
+    # def test_throughput(self):
+    #     start_time = time.time()
+    #     for _ in range(100):
+    #         prediction = self.model(self.test_input)
+    #     end_time = time.time()
+    #     throughput = 100 / (end_time - start_time)
+    #     self.assertGreater(throughput, 10)  # model should handle at least 10 inferences per second
 
 
 if __name__ == "__main__":