
Conversation

@mikepapadim
Member

No description provided.

… and kernels.

- Adapt tensors, task graphs, and layer planners to support `HalfFloatArray`.
- Replace FP32 arrays with FP16-compatible implementations in key inference states (`wrapX`).
- Add new FP16-specific kernels for data transfer and activation computations (see the sketch after this list).
- Optimize Q8_0 quantized operations with FP16 tensor support for improved efficiency.
- Update `State` classes and TornadoVM integrations to use FP16 data structures on key activation paths.
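
To make the kernel bullets above concrete, here is a minimal sketch of what such FP16 kernels can look like in TornadoVM. It assumes TornadoVM's `HalfFloatArray`/`HalfFloat` types with `get`/`set`/`getFloat32` accessors (exact names may vary across TornadoVM versions); this is illustrative, not the PR's actual code.

```java
import uk.ac.manchester.tornado.api.annotations.Parallel;
import uk.ac.manchester.tornado.api.types.HalfFloat;
import uk.ac.manchester.tornado.api.types.arrays.FloatArray;
import uk.ac.manchester.tornado.api.types.arrays.HalfFloatArray;

public final class Fp16Kernels {

    // Data-transfer kernel: narrows an FP32 buffer into an FP16 array,
    // the direction needed once activation state moves to HalfFloatArray.
    public static void copyToFp16(FloatArray src, HalfFloatArray dst) {
        for (@Parallel int i = 0; i < dst.getSize(); i++) {
            dst.set(i, new HalfFloat(src.get(i)));
        }
    }

    // Activation kernel: SiLU evaluated in FP32 for accuracy, stored as FP16.
    public static void siluFp16(HalfFloatArray x) {
        for (@Parallel int i = 0; i < x.getSize(); i++) {
            float v = x.get(i).getFloat32();
            x.set(i, new HalfFloat(v / (1.0f + (float) Math.exp(-v))));
        }
    }
}
```

The Q8_0 bullet follows the same pattern: in the standard Q8_0 layout each block of 32 int8 weights carries one FP16 scale, so keeping activations and scales in FP16 lets the kernel apply the per-block scale without widening everything to FP32 first.
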
…maintainability by adding step-by-step comments and simplifying scaled output computation.
…n and remove obsolete hacky methods.

- Replace `loadTornadoTensorAsFP32` with `loadTornadoTensor` for cleaner tensor loading (a hedged sketch follows this list).
- Add logging for tensor loading details in `loadTornadoTensor`.
- Remove the `copyHack` method and its associated comments from the compute kernels and the logits layer.
- Update the `wrapX` state in inference to use `asHalfFloatArray` for FP16 support.
- Clean up redundant initialization and tasks in the FP16 logits layer.
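
A hedged sketch of what the consolidated loader could look like. Only `loadTornadoTensor`, `wrapX`, and `asHalfFloatArray` appear in the PR itself; the signature, the `fromSegment` factory, and the log format below are hypothetical stand-ins.

```java
import java.lang.foreign.MemorySegment;
import uk.ac.manchester.tornado.api.types.arrays.HalfFloatArray;

final class TensorLoading {

    // Replaces loadTornadoTensorAsFP32: hands the tensor to TornadoVM in its
    // native FP16 layout instead of widening every element to FP32 first.
    static HalfFloatArray loadTornadoTensor(String name, MemorySegment segment) {
        // Logging for tensor loading details, as the PR describes (format assumed).
        System.out.printf("loadTornadoTensor: %s (%d bytes, FP16)%n",
                name, segment.byteSize());
        return HalfFloatArray.fromSegment(segment); // hypothetical zero-copy factory
    }
}
```

Once `wrapX` exposes the FP16 view directly (e.g. via `asHalfFloatArray`), the intermediate FP32 staging buffer, and the `copyHack` that filled it, has nothing left to do, which is why this commit can delete it.
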