FADiff: Optimizing DNN Scheduling on Tensor Accelerators with Fusion-Aware Differentiable Optimization
Analysis
This research applies differentiable optimization techniques to DNN scheduling on tensor accelerators. The paper's contribution lies in its fusion awareness: operator fusion decisions are treated as part of the schedule being optimized, which likely improves performance by avoiding the materialization of intermediate tensors between fused operators.
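To make the core idea concrete, here is a minimal sketch of differentiable optimization over a discrete fusion decision. This is not FADiff's actual formulation; the cost model, the sigmoid relaxation, and all numbers are illustrative assumptions. A binary fuse/no-fuse choice is relaxed to a probability, the expected cost is minimized by gradient descent, and the result is rounded back to a discrete schedule.

```python
import math

# Toy setting: two adjacent operators with one binary decision, fuse or not.
# Not fusing pays the cost of materializing the intermediate tensor;
# fusing pays some extra compute/register-pressure cost instead.
# These costs are made-up illustrative numbers, not FADiff's cost model.
MEM_COST = 10.0   # cost of writing/reading the intermediate (if not fused)
FUSE_COST = 4.0   # extra cost incurred by the fused kernel

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def expected_cost(theta):
    # Relax the binary decision to p = sigmoid(theta), the probability
    # of fusing, and take the expected cost under that probability.
    p = sigmoid(theta)
    return p * FUSE_COST + (1 - p) * MEM_COST

def grad(theta):
    # d/dtheta [p*F + (1-p)*M] = (F - M) * p * (1 - p)
    p = sigmoid(theta)
    return (FUSE_COST - MEM_COST) * p * (1 - p)

theta = 0.0
for _ in range(200):
    theta -= 0.5 * grad(theta)   # plain gradient descent

fuse = sigmoid(theta) > 0.5      # round back to a discrete schedule
print("fuse:", fuse)             # fusing wins under this toy cost model
```

Real schedulers face many such interacting decisions (fusion groups, tilings, loop orders), but the same relax-optimize-round pattern carries over.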
Key Takeaways
FADiff targets DNN scheduling on tensor accelerators. It uses differentiable optimization, making the schedule search amenable to gradient-based methods, and it is fusion-aware, so operator fusion decisions are handled within the optimization rather than as a separate pass.
Reference
“FADiff focuses on DNN scheduling on Tensor Accelerators.”