Beidi Chen
2020 – today
- 2024
- [c32] Mostafa Elhoushi, Akshat Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, Liangzhen Lai, Anas Mahmoud, Bilge Acun, Saurabh Agarwal, Ahmed Roman, Ahmed A Aly, Beidi Chen, Carole-Jean Wu: LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding. ACL (1) 2024: 12622-12642
- [c31] Yuandong Tian, Yiping Wang, Zhenyu Zhang, Beidi Chen, Simon Shaolei Du: JoMA: Demystifying Multilayer Transformers via Joint Dynamics of MLP and Attention. ICLR 2024
- [c30] Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis: Efficient Streaming Language Models with Attention Sinks. ICLR 2024
- [c29] Ruisi Cai, Yuandong Tian, Zhangyang Wang, Beidi Chen: LoCoCo: Dropping In Convolutions for Long Context Compression. ICML 2024
- [c28] Harry Dong, Xinyu Yang, Zhenyu Zhang, Zhangyang Wang, Yuejie Chi, Beidi Chen: Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference. ICML 2024
- [c27] Youhe Jiang, Ran Yan, Xiaozhe Yao, Yang Zhou, Beidi Chen, Binhang Yuan: HexGen: Generative Inference of Large Language Model over Heterogeneous Environment. ICML 2024
- [c26] Zirui Liu, Jiayi Yuan, Hongye Jin, Shaochen Zhong, Zhaozhuo Xu, Vladimir Braverman, Beidi Chen, Xia Hu: KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache. ICML 2024
- [c25] Zhaozhuo Xu, Zirui Liu, Beidi Chen, Shaochen Zhong, Yuxin Tang, Jue Wang, Kaixiong Zhou, Xia Hu, Anshumali Shrivastava: Soft Prompt Recovers Compressed LLMs, Transferably. ICML 2024
- [c24] Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, Yuandong Tian: GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection. ICML 2024
- [c23] Zhenyu Zhang, Shiwei Liu, Runjin Chen, Bhavya Kailkhura, Beidi Chen, Atlas Wang: Q-Hitter: A Better Token Oracle for Efficient LLM Inference via Sparse-Quantized KV Cache. MLSys 2024
- [i49] Zirui Liu, Jiayi Yuan, Hongye Jin, Shaochen Zhong, Zhaozhuo Xu, Vladimir Braverman, Beidi Chen, Xia Hu: KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache. CoRR abs/2402.02750 (2024)
- [i48] Haizhong Zheng, Xiaoyan Bai, Beidi Chen, Fan Lai, Atul Prakash: Learn To be Efficient: Build Structured Sparsity in Large Language Models. CoRR abs/2402.06126 (2024)
- [i47] Harry Dong, Xinyu Yang, Zhenyu Zhang, Zhangyang Wang, Yuejie Chi, Beidi Chen: Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference. CoRR abs/2402.09398 (2024)
- [i46] Zhuoming Chen, Avner May, Ruslan Svirschevski, Yuhsun Huang, Max Ryabinin, Zhihao Jia, Beidi Chen: Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding. CoRR abs/2402.12374 (2024)
- [i45] Zhihang Yuan, Yuzhang Shang, Yang Zhou, Zhen Dong, Zhe Zhou, Chenhao Xue, Bingzhe Wu, Zhikai Li, Qingyi Gu, Yong Jae Lee, Yan Yan, Beidi Chen, Guangyu Sun, Kurt Keutzer: LLM Inference Unveiled: Survey and Roofline Model Insights. CoRR abs/2402.16363 (2024)
- [i44] Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, Yuandong Tian: GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection. CoRR abs/2403.03507 (2024)
- [i43] Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang: Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding. CoRR abs/2403.04797 (2024)
- [i42] Harry Dong, Beidi Chen, Yuejie Chi: Prompt-prompted Mixture of Experts for Efficient LLM Generation. CoRR abs/2404.01365 (2024)
- [i41] Xuezhe Ma, Xiaomeng Yang, Wenhan Xiong, Beidi Chen, Lili Yu, Hao Zhang, Jonathan May, Luke Zettlemoyer, Omer Levy, Chunting Zhou: Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length. CoRR abs/2404.08801 (2024)
- [i40] Hanshi Sun, Zhuoming Chen, Xinyu Yang, Yuandong Tian, Beidi Chen: TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding. CoRR abs/2404.11912 (2024)
- [i39] Mostafa Elhoushi, Akshat Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, Liangzhen Lai, Anas Mahmoud, Bilge Acun, Saurabh Agarwal, Ahmed Roman, Ahmed A Aly, Beidi Chen, Carole-Jean Wu: LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding. CoRR abs/2404.16710 (2024)
- [i38] Jianyu Zhang, Niklas Nolte, Ranajoy Sadhukhan, Beidi Chen, Léon Bottou: Memory Mosaics. CoRR abs/2405.06394 (2024)
- [i37] Minghan Li, Xilun Chen, Ari Holtzman, Beidi Chen, Jimmy Lin, Wen-tau Yih, Xi Victoria Lin: Nearest Neighbor Speculative Decoding for LLM Generation and Attribution. CoRR abs/2405.19325 (2024)
- [i36] Ruslan Svirschevski, Avner May, Zhuoming Chen, Beidi Chen, Zhihao Jia, Max Ryabinin: SpecExec: Massively Parallel Speculative Decoding for Interactive LLM Inference on Consumer Devices. CoRR abs/2406.02532 (2024)
- [i35] Wentao Guo, Jikai Long, Yimeng Zeng, Zirui Liu, Xinyu Yang, Yide Ran, Jacob R. Gardner, Osbert Bastani, Christopher De Sa, Xiaodong Yu, Beidi Chen, Zhaozhuo Xu: Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity. CoRR abs/2406.02913 (2024)
- [i34] Ruisi Cai, Yuandong Tian, Zhangyang Wang, Beidi Chen: LoCoCo: Dropping In Convolutions for Long Context Compression. CoRR abs/2406.05317 (2024)
- [i33] Taiming Lu, Lingfeng Shen, Xinyu Yang, Weiting Tan, Beidi Chen, Huaxiu Yao: It Takes Two: On the Seamlessness between Reward and Policy Model in RLHF. CoRR abs/2406.07971 (2024)
- [i32] Ceyu Xu, Yongji Wu, Xinyu Yang, Beidi Chen, Matthew Lentz, Danyang Zhuo, Lisa Wu Wills: VcLLM: Video Codecs are Secretly Tensor Codecs. CoRR abs/2407.00467 (2024)
- [i31] Cheng Luo, Jiawei Zhao, Zhuoming Chen, Beidi Chen, Anima Anandkumar: MINI-SEQUENCE TRANSFORMER: Optimizing Intermediate Memory for Long Sequences Training. CoRR abs/2407.15892 (2024)
- [i30] Jian Chen, Vashisth Tiwari, Ranajoy Sadhukhan, Zhuoming Chen, Jinyuan Shi, Ian En-Hsu Yen, Beidi Chen: MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding. CoRR abs/2408.11049 (2024)
- [i29] Yang Zhou, Zhuoming Chen, Zhaozhuo Xu, Victoria Lin, Beidi Chen: Sirius: Contextual Sparsity with Correction for Efficient LLMs. CoRR abs/2409.03856 (2024)
- 2023
- [c22] Moses Charikar, Beidi Chen, Christopher Ré, Erik Waingarten: Fast Algorithms for a New Relaxation of Optimal Transport. COLT 2023: 4831-4862
- [c21] Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Ré, Beidi Chen: Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time. ICML 2023: 22137-22176
- [c20] Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Beidi Chen, Percy Liang, Christopher Ré, Ion Stoica, Ce Zhang: FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU. ICML 2023: 31094-31116
- [c19] Jue Wang, Yucheng Lu, Binhang Yuan, Beidi Chen, Percy Liang, Christopher De Sa, Christopher Ré, Ce Zhang: CocktailSGD: Fine-tuning Foundation Models over 500Mbps Networks. ICML 2023: 36058-36076
- [c18] Stefano Massaroli, Michael Poli, Daniel Y. Fu, Hermann Kumbong, Rom N. Parnichkun, David W. Romero, Aman Timalsina, Quinn McIntyre, Beidi Chen, Atri Rudra, Ce Zhang, Christopher Ré, Stefano Ermon, Yoshua Bengio: Laughing Hyena Distillery: Extracting Compact Recurrences From Convolutions. NeurIPS 2023
- [c17] Yuandong Tian, Yiping Wang, Beidi Chen, Simon S. Du: Scan and Snap: Understanding Training Dynamics and Token Composition in 1-layer Transformer. NeurIPS 2023
- [c16] Zhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher Ré, Clark W. Barrett, Zhangyang Wang, Beidi Chen: H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. NeurIPS 2023
- [i28] Andrew Cohen, Weiping Dou, Jiang Zhu, Slawomir Koziel, Peter Renner, Jan-Ove Mattsson, Xiaomeng Yang, Beidi Chen, Kevin Stone, Yuandong Tian: Modeling Scattering Coefficients using Self-Attentive Complex Polynomials with Image-based Representation. CoRR abs/2301.02747 (2023)
- [i27] Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Daniel Y. Fu, Zhiqiang Xie, Beidi Chen, Clark W. Barrett, Joseph E. Gonzalez, Percy Liang, Christopher Ré, Ion Stoica, Ce Zhang: High-throughput Generative Inference of Large Language Models with a Single GPU. CoRR abs/2303.06865 (2023)
- [i26] Zhaozhuo Xu, Zirui Liu, Beidi Chen, Yuxin Tang, Jue Wang, Kaixiong Zhou, Xia Hu, Anshumali Shrivastava: Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM Inference with Transferable Prompt. CoRR abs/2305.11186 (2023)
- [i25] Yuandong Tian, Yiping Wang, Beidi Chen, Simon S. Du: Scan and Snap: Understanding Training Dynamics and Token Composition in 1-layer Transformer. CoRR abs/2305.16380 (2023)
- [i24] Jiawei Zhao, Yifei Zhang, Beidi Chen, Florian Schäfer, Anima Anandkumar: InRank: Incremental Low-Rank Learning. CoRR abs/2306.11250 (2023)
- [i23] Zhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher Ré, Clark W. Barrett, Zhangyang Wang, Beidi Chen: H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. CoRR abs/2306.14048 (2023)
- [i22] Moses Charikar, Beidi Chen, Christopher Ré, Erik Waingarten: Fast Algorithms for a New Relaxation of Optimal Transport. CoRR abs/2307.10042 (2023)
- [i21] Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis: Efficient Streaming Language Models with Attention Sinks. CoRR abs/2309.17453 (2023)
- [i20] Yuandong Tian, Yiping Wang, Zhenyu Zhang, Beidi Chen, Simon S. Du: JoMA: Demystifying Multilayer Transformers via JOint Dynamics of MLP and Attention. CoRR abs/2310.00535 (2023)
- [i19] Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Ré, Beidi Chen: Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time. CoRR abs/2310.17157 (2023)
- [i18] Stefano Massaroli, Michael Poli, Daniel Y. Fu, Hermann Kumbong, Rom N. Parnichkun, Aman Timalsina, David W. Romero, Quinn McIntyre, Beidi Chen, Atri Rudra, Ce Zhang, Christopher Ré, Stefano Ermon, Yoshua Bengio: Laughing Hyena Distillery: Extracting Compact Recurrences From Convolutions. CoRR abs/2310.18780 (2023)
- [i17] Youhe Jiang, Ran Yan, Xiaozhe Yao, Beidi Chen, Binhang Yuan: HexGen: Generative Inference of Foundation Model over Heterogeneous Decentralized Environment. CoRR abs/2311.11514 (2023)
- 2022
- [c15] Beidi Chen, Tri Dao, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, Christopher Ré: Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models. ICLR 2022
- [c14] Tri Dao, Beidi Chen, Nimit Sharad Sohoni, Arjun D. Desai, Michael Poli, Jessica Grogan, Alexander Liu, Aniruddh Rao, Atri Rudra, Christopher Ré: Monarch: Expressive Structured Matrices for Efficient and Accurate Training. ICML 2022: 4690-4721
- [c13] Zichang Liu, Zhaozhuo Xu, Alan Baonan Ji, Junyan Zhang, Jonathan Li, Beidi Chen, Anshumali Shrivastava: HALOS: Hashing Large Output Space for Cheap Inference. MLSys 2022
- [c12] Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Ré, Ce Zhang: Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees. NeurIPS 2022
- [c11] Binhang Yuan, Yongjun He, Jared Davis, Tianyi Zhang, Tri Dao, Beidi Chen, Percy Liang, Christopher Ré, Ce Zhang: Decentralized Training of Foundation Models in Heterogeneous Environments. NeurIPS 2022
- [i16] Tri Dao, Beidi Chen, Nimit Sharad Sohoni, Arjun D. Desai, Michael Poli, Jessica Grogan, Alexander Liu, Aniruddh Rao, Atri Rudra, Christopher Ré: Monarch: Expressive Structured Matrices for Efficient and Accurate Training. CoRR abs/2204.00595 (2022)
- [i15] Binhang Yuan, Yongjun He, Jared Quincy Davis, Tianyi Zhang, Tri Dao, Beidi Chen, Percy Liang, Christopher Ré, Ce Zhang: Decentralized Training of Foundation Models in Heterogeneous Environments. CoRR abs/2206.01288 (2022)
- [i14] Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Ré, Ce Zhang: Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees. CoRR abs/2206.01299 (2022)
- 2021
- [c10] Beidi Chen, Zichang Liu, Binghui Peng, Zhaozhuo Xu, Jonathan Lingjie Li, Tri Dao, Zhao Song, Anshumali Shrivastava, Christopher Ré: MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training. ICLR 2021
- [c9] Tharun Medini, Beidi Chen, Anshumali Shrivastava: SOLAR: Sparse Orthogonal Learned and Random Embeddings. ICLR 2021
- [c8] Shabnam Daghaghi, Tharun Medini, Nicholas Meisburger, Beidi Chen, Mengnan Zhao, Anshumali Shrivastava: A Tale of Two Efficient and Informative Negative Sampling Distributions. ICML 2021: 2319-2329
- [c7] Beidi Chen, Tri Dao, Eric Winsor, Zhao Song, Atri Rudra, Christopher Ré: Scatterbrain: Unifying Sparse and Low-rank Attention. NeurIPS 2021: 17413-17426
- [c6] Zhaozhuo Xu, Beidi Chen, Chaojian Li, Weiyang Liu, Le Song, Yingyan Lin, Anshumali Shrivastava: Locality Sensitive Teaching. NeurIPS 2021: 18049-18062
- [i13] Beidi Chen, Tri Dao, Eric Winsor, Zhao Song, Atri Rudra, Christopher Ré: Scatterbrain: Unifying Sparse and Low-rank Attention Approximation. CoRR abs/2110.15343 (2021)
- [i12] Zhaozhuo Xu, Alan Baonan Ji, Andrew Woods, Beidi Chen, Anshumali Shrivastava: Satellite Images and Deep Learning to Identify Discrepancy in Mailing Addresses with Applications to Census 2020 in Houston. CoRR abs/2111.06562 (2021)
- [i11] Beidi Chen, Tri Dao, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, Christopher Ré: Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models. CoRR abs/2112.00029 (2021)
- 2020
- [c5] Beidi Chen, Weiyang Liu, Zhiding Yu, Jan Kautz, Anshumali Shrivastava, Animesh Garg, Animashree Anandkumar: Angular Visual Hardness. ICML 2020: 1637-1648
- [c4] Beidi Chen, Tharun Medini, James Farwell, Sameh Gobriel, Tsung-Yuan Charlie Tai, Anshumali Shrivastava: SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems. MLSys 2020
- [i10] Zichang Liu, Zhaozhuo Xu, Alan Baonan Ji, Jonathan Li, Beidi Chen, Anshumali Shrivastava: Climbing the WOL: Training for Cheaper Inference. CoRR abs/2007.01230 (2020)
- [i9] Tharun Medini, Beidi Chen, Anshumali Shrivastava: SOLAR: Sparse Orthogonal Learned and Random Embeddings. CoRR abs/2008.13225 (2020)
- [i8] Shabnam Daghaghi, Tharun Medini, Beidi Chen, Mengnan Zhao, Anshumali Shrivastava: A Constant-time Adaptive Negative Sampling. CoRR abs/2012.15843 (2020)
2010 – 2019
- 2019
- [c3] Beidi Chen, Yingchen Xu, Anshumali Shrivastava: Fast and Accurate Stochastic Gradient Estimation. NeurIPS 2019: 12339-12349
- [i7] Beidi Chen, Tharun Medini, Anshumali Shrivastava: SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems. CoRR abs/1903.03129 (2019)
- [i6] Beidi Chen, Yingchen Xu, Anshumali Shrivastava: Lsh-sampling Breaks the Computation Chicken-and-egg Loop in Adaptive Stochastic Gradient Estimation. CoRR abs/1910.14162 (2019)
- [i5] Beidi Chen, Weiyang Liu, Animesh Garg, Zhiding Yu, Anshumali Shrivastava, Jan Kautz, Anima Anandkumar: Angular Visual Hardness. CoRR abs/1912.02279 (2019)
- [i4] M. Sadegh Riazi, Beidi Chen, Anshumali Shrivastava, Dan S. Wallach, Farinaz Koushanfar: Sub-Linear Privacy-Preserving Near-Neighbor Search. IACR Cryptol. ePrint Arch. 2019: 1222 (2019)
- 2018
- [c2] Beidi Chen, Yingchen Xu, Anshumali Shrivastava: Lsh-Sampling breaks the Computational chicken-and-egg Loop in adaptive stochastic Gradient estimation. ICLR (Workshop) 2018
- [c1] Beidi Chen, Anshumali Shrivastava: Densified Winner Take All (WTA) Hashing for Sparse Datasets. UAI 2018: 906-916
- 2017
- [i3] Beidi Chen, Anshumali Shrivastava, Rebecca C. Steorts: Unique Entity Estimation with Application to the Syrian Conflict. CoRR abs/1710.02690 (2017)
- 2016
- [i2] Beidi Chen, Anshumali Shrivastava: Revisiting Winner Take All (WTA) Hashing for Sparse Datasets. CoRR abs/1612.01834 (2016)
- [i1] M. Sadegh Riazi, Beidi Chen, Anshumali Shrivastava, Dan S. Wallach, Farinaz Koushanfar: Sub-linear Privacy-preserving Search with Untrusted Server and Semi-honest Parties. CoRR abs/1612.01835 (2016)
last updated on 2024-11-05 21:00 CET by the dblp team
all metadata released as open data under CC0 1.0 license