add int8 matmul support to CUDA backend by verstatx · Pull Request #3508 · arrayfire/arrayfire · GitHub
add int8 matmul support to CUDA backend #3508

Merged
edwinsolisf merged 1 commit into arrayfire:master from verstatx:int8_matmul
Mar 28, 2025

Conversation

@verstatx (Contributor) commented Oct 4, 2023

Description

Adds support for int8 matmul in the CUDA backend using cublasGemmEx. This changes the gemm functions' API to allow an output array type that differs from the input type, so all backends were modified accordingly.

This PR depends on s8 support: #3507
Fixes: #1656
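For context, here is a minimal NumPy sketch (illustrative only, not ArrayFire or cuBLAS code; shapes and values are made up) of why the gemm API needs an output type distinct from the input type: an int8 × int8 dot product overflows int8 almost immediately, which is why cublasGemmEx pairs CUDA_R_8I inputs with a 32-bit output and accumulator.

```python
import numpy as np

# Illustrative int8 inputs with values near the type's max (127).
a = np.full((2, 64), 100, dtype=np.int8)
b = np.full((64, 2), 100, dtype=np.int8)

# Accumulate in int32, analogous to cublasGemmEx taking CUDA_R_8I
# inputs with a CUDA_R_32I output/compute type.
c = a.astype(np.int32) @ b.astype(np.int32)

print(c[0, 0])  # 64 * 100 * 100 = 640000, far outside int8's [-128, 127]
```

Keeping the product in int8 would wrap around, so the matmul API must be able to return a wider type than its inputs.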

Checklist

@verstatx verstatx marked this pull request as draft October 4, 2023 10:52
@melonakos melonakos marked this pull request as ready for review February 11, 2025 21:48
@melonakos melonakos added this to the 3.10 milestone Feb 11, 2025
@edwinsolisf (Contributor) previously approved these changes Mar 16, 2025 and left a comment:

Tested all backends on an RTX 3070 Ti on Ubuntu & Windows

Commit: "changes to gemm account for differing input/output types"
@edwinsolisf edwinsolisf self-requested a review March 20, 2025 23:40
@edwinsolisf edwinsolisf merged commit ccac73e into arrayfire:master Mar 28, 2025
2 of 4 checks passed


Development

Successfully merging this pull request may close these issues.

Add support for int8 matmul in afcuda

3 participants