
Add static analysis using fortitude, clang, and ruff #187

Merged: 13 commits, Nov 25, 2024
97 changes: 97 additions & 0 deletions .github/workflows/linting.yml
@@ -0,0 +1,97 @@
# Workflow to run static-analysis and linting checks on source

name: StaticAnalysis

# Controls when the workflow will run
on:
  # Triggers the workflow on pushes to the "main" branch and any pull request events
  push:
    branches: [ "main" ]
  pull_request:

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Workflow run - one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "static-analysis"
  static-analysis:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks out your repository under $GITHUB_WORKSPACE so your job can access it
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Install Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.x'

      - name: Install Dependencies
        run: |
          python -m pip install --upgrade pip
          python -m venv ftorch_venv
          . ftorch_venv/bin/activate
          pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
          pip install -r requirements.txt

      # Run CMake build to get compile commands for clang
      - name: FTorch CMake
        run: |
          . ftorch_venv/bin/activate
          export FT_DIR=$(pwd)
          VN=$(python -c "import sys; print('.'.join(sys.version.split('.')[:2]))")
          export Torch_DIR=${VIRTUAL_ENV}/lib/python${VN}/site-packages
          export BUILD_DIR=$(pwd)/src/build
          mkdir ${BUILD_DIR}
          cd ${BUILD_DIR}
          cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=${BUILD_DIR} -DCMAKE_Fortran_FLAGS="-std=f2008" -DCMAKE_EXPORT_COMPILE_COMMANDS=ON

      # Apply the Fortran linter, fortitude
      # Configurable using the fortitude.toml file if present
      - name: fortitude source
        if: always()
        run: |
          cd ${{ github.workspace }}
          . ftorch_venv/bin/activate
          fortitude check src/

      # Apply the C++ and C linter and formatter, clang
      # Configurable using the .clang-format and .clang-tidy config files if present
      - name: clang source
        if: always()
        uses: cpp-linter/cpp-linter-action@v2
        id: linter
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          style: 'file'
          tidy-checks: ''
          # Use the compile_commands.json from CMake to locate headers
          database: ${{ github.workspace }}/src/build
          # Only 'update' a single comment in a pull request thread
          thread-comments: ${{ github.event_name == 'pull_request' && 'update' }}

      - name: Fail fast?!
        if: steps.linter.outputs.checks-failed > 0
        run: exit 1

      # Apply the Fortran linter, fortitude, to the examples
      - name: fortitude examples
        if: always()
        run: |
          cd ${{ github.workspace }}
          . ftorch_venv/bin/activate
          fortitude check examples

      - name: ruff
        if: always()
        run: |
          cd ${{ github.workspace }}
          . ftorch_venv/bin/activate
          ruff format --diff ./
          ruff check --diff ./
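The `FTorch CMake` step above derives `Torch_DIR` from the active interpreter's major.minor version. As a sketch, the shell one-liner it embeds is equivalent to the following (the `site_packages` variable is illustrative, not part of the workflow):

```python
# Unpacks the one-liner used to set VN in the "FTorch CMake" step:
#   python -c "import sys; print('.'.join(sys.version.split('.')[:2]))"
import sys

# sys.version starts with e.g. "3.11.4 (main, ...)"; keeping the first two
# dot-separated fields yields the "major.minor" string needed to build the
# lib/pythonX.Y/site-packages path component for Torch_DIR.
version = ".".join(sys.version.split(".")[:2])
site_packages = f"lib/python{version}/site-packages"
```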
32 changes: 0 additions & 32 deletions .github/workflows/python_qc.yaml

This file was deleted.

1 change: 1 addition & 0 deletions .gitignore
@@ -157,6 +157,7 @@ venv/
 ENV/
 env.bak/
 venv.bak/
+ftorch_venv/

# Spyder project settings
.spyderproject
20 changes: 14 additions & 6 deletions examples/1_SimpleNet/pt2ts.py
@@ -4,12 +4,12 @@
 import os
 import sys
 from typing import Optional
+import torch

 # FPTLIB-TODO
 # Add a module import with your model here:
 # This example assumes the model architecture is in an adjacent module `my_ml_model.py`
 import simplenet
-import torch


def script_to_torchscript(
@@ -120,10 +120,10 @@ def load_torchscript(filename: Optional[str] = "saved_model.pt") -> torch.nn.Mod
     # Set the name of the file you want to save the torchscript model to:
     saved_ts_filename = "saved_simplenet_model_cpu.pt"
     # A filepath may also be provided. To do this, pass the filepath as an argument to
-    # this script when it is run from the command line, i.e., `./pt2ts.py path/to/model`.
+    # this script when it is run from the command line, i.e. `./pt2ts.py path/to/model`.

     # FPTLIB-TODO
-    # Save the PyTorch model using either scripting (recommended where possible) or tracing
+    # Save the PyTorch model using either scripting (recommended if possible) or tracing
     # -----------
     # Scripting
     # -----------
@@ -132,7 +132,9 @@ def load_torchscript(filename: Optional[str] = "saved_model.pt") -> torch.nn.Mod
     # -----------
     # Tracing
     # -----------
-    # trace_to_torchscript(trained_model, trained_model_dummy_input, filename=saved_ts_filename)
+    # trace_to_torchscript(
+    #     trained_model, trained_model_dummy_input, filename=saved_ts_filename
+    # )
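The commented-out scripting and tracing calls above correspond to the two TorchScript conversion routes. A minimal sketch of the difference, using a toy module rather than the SimpleNet from this example:

```python
# Contrasts torch.jit.script and torch.jit.trace on a toy stand-in model
# (an assumption for illustration; it is not the model from this PR).
import torch


class TinyNet(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return 2 * x


model = TinyNet()
dummy_input = torch.ones(5)

# Scripting compiles the Python source itself, so control flow is preserved.
scripted = torch.jit.script(model)

# Tracing records the operations executed for one concrete input; any
# data-dependent branches are frozen to the path taken during the trace.
traced = torch.jit.trace(model, dummy_input)
```

Either object can then be saved with `.save(...)` and loaded from Fortran via FTorch.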

print(f"Saved model to TorchScript in '{saved_ts_filename}'.")

@@ -161,11 +163,17 @@ def load_torchscript(filename: Optional[str] = "saved_model.pt") -> torch.nn.Mod
         print("Saved TorchScript model working as expected in a basic test.")
         print("Users should perform further validation as appropriate.")
     else:
-        raise RuntimeError(
+        model_error = (
             "Saved Torchscript model is not performing as expected.\n"
             "Consider using scripting if you used tracing, or investigate further."
         )
+        raise RuntimeError(model_error)

     # Check that the model file is created
     filepath = os.path.dirname(__file__) if len(sys.argv) == 1 else sys.argv[1]
-    assert os.path.exists(os.path.join(filepath, saved_ts_filename))
+    if not os.path.exists(os.path.join(filepath, saved_ts_filename)):
+        torchscript_file_error = (
+            f"Saved TorchScript file {os.path.join(filepath, saved_ts_filename)} "
+            "cannot be found."
+        )
+        raise FileNotFoundError(torchscript_file_error)
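Throughout this diff, bare `assert` statements and inline exception messages are replaced by a build-the-message-then-raise pattern (consistent with ruff's EM rules, which flag string literals and f-strings passed directly to exception constructors). A standalone sketch of the pattern, with a hypothetical helper name:

```python
# Sketch of the message-variable-then-raise pattern used in this PR.
# `check_saved_file` is a hypothetical helper, not a function from the repo.
import os


def check_saved_file(filepath: str, filename: str) -> None:
    """Raise FileNotFoundError with a pre-built message if the file is missing."""
    full_path = os.path.join(filepath, filename)
    if not os.path.exists(full_path):
        # Assigning the message to a variable first keeps the raise line
        # short, so the traceback does not repeat the full message text.
        file_error = f"Saved TorchScript file {full_path} cannot be found."
        raise FileNotFoundError(file_error)
```

Unlike `assert`, the explicit raise also survives running Python with `-O`, which strips assertions.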
7 changes: 6 additions & 1 deletion examples/1_SimpleNet/simplenet.py
@@ -51,4 +51,9 @@ def forward(self, batch: torch.Tensor) -> torch.Tensor:
     output_tensor = model(input_tensor)

     print(output_tensor)
-    assert torch.allclose(output_tensor, 2 * input_tensor)
+    if not torch.allclose(output_tensor, 2 * input_tensor):
+        result_error = (
+            f"result:\n{output_tensor}\ndoes not match expected value:\n"
+            f"{2 * input_tensor}"
+        )
+        raise ValueError(result_error)
11 changes: 6 additions & 5 deletions examples/1_SimpleNet/simplenet_infer_fortran.f90
@@ -4,13 +4,14 @@ program inference
   use, intrinsic :: iso_fortran_env, only : sp => real32

   ! Import our library for interfacing with PyTorch
-  use ftorch
+  use ftorch, only : torch_model, torch_tensor, torch_kCPU, torch_delete, &
+                     torch_tensor_from_array, torch_model_load, torch_model_forward

   ! Import our tools module for testing utils
   use ftorch_test_utils, only : assert_allclose

   implicit none

   ! Set working precision for reals
   integer, parameter :: wp = sp

@@ -21,7 +22,7 @@ program inference
   real(wp), dimension(5), target :: in_data
   real(wp), dimension(5), target :: out_data
   real(wp), dimension(5), target :: expected
-  integer :: tensor_layout(1) = [1]
+  integer, parameter :: tensor_layout(1) = [1]

   ! Set up Torch data structures
   ! The net, a vector of input tensors (in this case we only have one), and the output tensor
@@ -40,7 +41,7 @@ program inference
   end do

   ! Initialise data
-  in_data = [0.0, 1.0, 2.0, 3.0, 4.0]
+  in_data = [0.0_wp, 1.0_wp, 2.0_wp, 3.0_wp, 4.0_wp]

   ! Create Torch input/output tensors from the above arrays
   call torch_tensor_from_array(in_tensors(1), in_data, tensor_layout, torch_kCPU)
@@ -54,7 +55,7 @@ program inference
   write (*,*) out_data(:)

   ! Check output tensor matches expected value
-  expected = [0.0, 2.0, 4.0, 6.0, 8.0]
+  expected = [0.0_wp, 2.0_wp, 4.0_wp, 6.0_wp, 8.0_wp]
   test_pass = assert_allclose(out_data, expected, test_name="SimpleNet", rtol=1e-5)

   ! Cleanup
11 changes: 9 additions & 2 deletions examples/1_SimpleNet/simplenet_infer_python.py
@@ -3,6 +3,7 @@

 import os
 import sys
+
 import torch


@@ -42,7 +43,8 @@ def deploy(saved_model: str, device: str, batch_size: int = 1) -> torch.Tensor:
         output = output_gpu.to(torch.device("cpu"))

     else:
-        raise ValueError(f"Device '{device}' not recognised.")
+        device_error = f"Device '{device}' not recognised."
+        raise ValueError(device_error)

     return output
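The hunk above shows `deploy` dispatching on a device string and rejecting anything it does not recognise. A framework-free sketch of the same dispatch shape (`run_on_device` is a hypothetical stand-in for `deploy`, and the CUDA branch is stubbed out since this sketch is CPU-only):

```python
# Sketch of the device-dispatch pattern in simplenet_infer_python.py's deploy().
from typing import Callable, List


def run_on_device(
    model: Callable[[List[float]], List[float]],
    device: str,
    batch: List[float],
) -> List[float]:
    """Run `model` on `batch`, dispatching on the device string."""
    if device == "cpu":
        return model(batch)
    if device == "cuda":
        # The real script moves the batch to the GPU and copies the output
        # back to the CPU; this CPU-only sketch cannot exercise that path.
        unsupported_error = "CUDA path not available in this sketch."
        raise RuntimeError(unsupported_error)
    # Unrecognised devices fail loudly, message built first as elsewhere in the PR.
    device_error = f"Device '{device}' not recognised."
    raise ValueError(device_error)
```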

@@ -60,4 +62,9 @@ def deploy(saved_model: str, device: str, batch_size: int = 1) -> torch.Tensor:

     print(result)

-    assert torch.allclose(result, torch.Tensor([0.0, 2.0, 4.0, 6.0, 8.0]))
+    if not torch.allclose(result, torch.Tensor([0.0, 2.0, 4.0, 6.0, 8.0])):
+        result_error = (
+            f"result:\n{result}\ndoes not match expected value:\n"
+            f"{torch.Tensor([0.0, 2.0, 4.0, 6.0, 8.0])}"
+        )
+        raise ValueError(result_error)
20 changes: 14 additions & 6 deletions examples/2_ResNet18/pt2ts.py
@@ -4,12 +4,12 @@
 import os
 import sys
 from typing import Optional
+import torch

 # FPTLIB-TODO
 # Add a module import with your model here:
 # This example assumes the model architecture is in an adjacent module `my_ml_model.py`
 import resnet18
-import torch


def script_to_torchscript(
@@ -126,10 +126,10 @@ def load_torchscript(filename: Optional[str] = "saved_model.pt") -> torch.nn.Mod
     # Set the name of the file you want to save the torchscript model to:
     saved_ts_filename = "saved_resnet18_model_cpu.pt"
     # A filepath may also be provided. To do this, pass the filepath as an argument to
-    # this script when it is run from the command line, i.e., `./pt2ts.py path/to/model`.
+    # this script when it is run from the command line, i.e. `./pt2ts.py path/to/model`.

     # FPTLIB-TODO
-    # Save the PyTorch model using either scripting (recommended where possible) or tracing
+    # Save the PyTorch model using either scripting (recommended if possible) or tracing
     # -----------
     # Scripting
     # -----------
@@ -138,7 +138,9 @@ def load_torchscript(filename: Optional[str] = "saved_model.pt") -> torch.nn.Mod
     # -----------
     # Tracing
     # -----------
-    # trace_to_torchscript(trained_model, trained_model_dummy_input, filename=saved_ts_filename)
+    # trace_to_torchscript(
+    #     trained_model, trained_model_dummy_input, filename=saved_ts_filename
+    # )

print(f"Saved model to TorchScript in '{saved_ts_filename}'.")

@@ -167,11 +169,17 @@ def load_torchscript(filename: Optional[str] = "saved_model.pt") -> torch.nn.Mod
         print("Saved TorchScript model working as expected in a basic test.")
         print("Users should perform further validation as appropriate.")
     else:
-        raise RuntimeError(
+        model_error = (
             "Saved Torchscript model is not performing as expected.\n"
             "Consider using scripting if you used tracing, or investigate further."
         )
+        raise RuntimeError(model_error)

     # Check that the model file is created
     filepath = os.path.dirname(__file__) if len(sys.argv) == 1 else sys.argv[1]
-    assert os.path.exists(os.path.join(filepath, saved_ts_filename))
+    if not os.path.exists(os.path.join(filepath, saved_ts_filename)):
+        torchscript_file_error = (
+            f"Saved TorchScript file {os.path.join(filepath, saved_ts_filename)} "
+            "cannot be found."
+        )
+        raise FileNotFoundError(torchscript_file_error)