Tianma's Website
Learned Image Compression with Transformers
In this paper, we propose a novel learning-based image coding system using transformer structures. Our context model codes latent representations in a channel-first order, followed by a 2D zigzag spatial order. Combined with transformer structures, such a context model extracts contextual information more effectively for better entropy coding. Further, we propose a transformer-based latent residual cross-attention prediction (LRCP) module to reduce quantization error. Compared to existing learned image compression approaches and traditional image compression methods, our proposed model achieves significantly better perceptual quality and rate-distortion (RD) performance.
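The abstract mentions decoding latents in a 2D zigzag spatial order. The paper's exact scan is not given here, but the classic anti-diagonal zigzag (as in transform coding) can be sketched as follows; the function name and grid-based formulation are illustrative assumptions, not the authors' implementation:

```python
def zigzag_order(h, w):
    """Enumerate (row, col) positions of an h x w grid in 2D zigzag
    (anti-diagonal) order. Illustrative sketch only; the paper's
    actual spatial scan may differ."""
    order = []
    for s in range(h + w - 1):  # s = row + col indexes each anti-diagonal
        diag = [(r, s - r) for r in range(h) if 0 <= s - r < w]
        if s % 2 == 0:          # alternate direction on successive diagonals
            diag.reverse()
        order.extend(diag)
    return order

# zigzag_order(2, 2) -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

In a channel-first autoregressive context model, all channels at one spatial position would be coded before moving to the next position in this scan.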
Tianma Shen, Ying Liu
Cite
Code
DA-BERT: Enhancing Part-of-Speech Tagging of Aspect Sentiment Analysis Using BERT
With the development of the Internet, text-based data from the web have grown exponentially, and these data carry a large amount of valuable …
Songwen Pei, Lulu Wang, Tianma Shen, Zhong Ning
Cite
DATRA: A Power-aware Dynamic Adaptive Threshold Routing Algorithm for Dragonfly Network-on-Chip Topology
Due to the significant power consumption and the extra hops from source to destination in the Dragonfly topology of …
Songwen Pei, Jihong Yuan, Yanfei Ji, Tianma Shen
Cite