Evaluation Science and Engineering Conference Submission Deadline Extended to 8:00 PM Beijing Time, October 26

Digest   Tech & Internet   2024-10-22 14:33   Beijing


The Workshop on Evaluation Science and Evaluation Engineering (Evaluatology 2024) and the International Conference on Evaluation Benchmarks and Standards (Bench 2024) will be held jointly in Guangzhou (December 4-6). At the request of submitters, the submission deadline has been extended to 8:00 PM Beijing time on October 26, 2024. The workshop focuses on evaluation issues in any field, and only a one-page abstract is required for submission. After review, presentation, and audience feedback, full papers will be published in the Bench 2024 proceedings (Springer LNCS, EI-indexed) or in the TBench journal (real-time CiteScore of 12.8 as of October 2024, EI-indexed).



Important Dates and Venue


Workshop date: December 4, 2024

Workshop venue: Guangzhou Huagong University Town Center Hotel, Guangzhou, Guangdong Province


Abstract (1 page) deadline: 8:00 PM Beijing time, October 26, 2024

Submissions must be in English!

Notification date: November 10, 2024, 24:00 (AoE)

Extended full-paper deadline: January 1, 2025, 24:00 (AoE)


Workshop website: https://www.benchcouncil.org/eva24/

Paper submission site: https://eva2024.hotcrp.com/




Call for Papers



We sincerely invite researchers from all fields to submit their work, with a particular emphasis on interdisciplinary research. Whether your research focuses on computer science, artificial intelligence (AI), medicine, education, finance, business, psychology, or other social sciences, we highly value and encourage relevant contributions.


To ensure a clear and concise focus on evaluation issues, please refer to "The Short Summary of Evaluatology: The Science and Engineering of Evaluation", J. F. Zhan, BenchCouncil Transactions on Benchmarks, Standards and Evaluations (https://www.sciencedirect.com/science/article/pii/S2772485924000279).


Topics include, but are not limited to, the following:

1. Evaluation theory and methodology

  • Formal specification of evaluation requirements

  • Development of evaluation models

  • Design and implementation of evaluation systems

  • Analysis of evaluation risk

  • Cost modeling for evaluations

  • Accuracy modeling for evaluations

  • Evaluation traceability

  • Identification and establishment of evaluation conditions

  • Equivalent evaluation conditions

  • Design of experiments

  • Statistical analysis techniques for evaluations

  • Methodologies and techniques for eliminating confounding factors in evaluations

  • Analytical modeling techniques and validation of models

  • Simulation and emulation-based modeling techniques and validation of models

  • Development of methodologies, metrics, abstractions, and algorithms specifically tailored for evaluations


2. The engineering of evaluation

  • Benchmark design and implementation

  • Benchmark traceability

  • Establishing the least equivalent evaluation conditions

  • Index design and implementation

  • Scale design and implementation

  • Evaluation standard design and implementation

  • Evaluation and benchmark practice

  • Tools for evaluations

  • Real-world evaluation systems

  • Testbeds


3. Datasets

  • Explicit or implicit problem definition deduced from the data set

  • Detailed descriptions of research or industry datasets, including the methods used to collect the data and technical analyses supporting the quality of the measurements

  • Analyses or meta-analyses of existing data

  • Systems, technologies, and techniques that advance data sharing and reuse to support reproducible research

  • Tools that generate large-scale data while preserving their original characteristics

  • Evaluating the rigor and quality of the experiments used to generate the data and the completeness of the data description


4. Benchmarking

  • Summary and review of state-of-the-art and state-of-the-practice

  • Searching and summarizing industry best practice

  • Evaluation and optimization of industry practice

  • Retrospective of industry practice

  • Characterizing and optimizing real-world applications and systems

  • Evaluations of state-of-the-art solutions in the real-world setting


5. Measurement and testing

  • Workload characterization

  • Instrumentation, sampling, tracing, and profiling of large-scale, real-world applications and systems

  • Collection and analysis of measurement and testing data that yield new insights

  • Measurement and testing-based modeling (e.g., workloads, scaling behavior, and assessment of performance bottlenecks)

  • Methods and tools to monitor and visualize measurement and testing data

  • Systems and algorithms that build on measurement and testing-based findings

  • Reappraisal of previous empirical measurements and measurement-based conclusions

  • Reappraisal of previous empirical testing and testing-based conclusions




Program Committee



Program Committee Co-Chairs

Jianfeng Zhan, Chinese Academy of Sciences, China

Weiping Li, Civil Aviation Flight University of China, China


Program Committee Vice Chairs

Maodeng Li, Beijing Institute of Control Engineering, China

Wanling Gao, ICT, Chinese Academy of Sciences, China


Technical Program Committee Members

Yadi Yu, National Center of Standards Evaluation, State Administration for Market Regulation, China

Geoffrey Fox, University of Virginia, USA

Chengliang Zhu, Institute of Quantitative & Technological Economics, Chinese Academy of Social Sciences

Yang Wang, Chinese Academy of Press and Publication, China

Wei Wang, East China Normal University, China

Hajdi Cenan, AIrt, Croatia

Jianming Tang, Hangzhou Dianzi University, China

Jialiang Tan, Lehigh University, USA

Dong Li, Zhongguancun Lab, China

Tong Wu, National Institute of Metrology, China

Davor Runje, AIrt, Croatia

Lei Wang, ICT, Chinese Academy of Sciences, China

Zhen Jia, Amazon, USA

Xingzhou Zhang, ICT, Chinese Academy of Sciences, China

Jungang Xu, University of Chinese Academy of Sciences, China

Yue Liu, Shanghai University, China

Yozo Dooymovich, San Francisco State University, USA

Weijun Zhong, China Electronics Standardization Institute, China

Runsong Zhou, China Software Test Center, China

Hong Zhou, National Science Library (Wuhan), Chinese Academy of Sciences, China

Tilmann Rabl, Hasso Plattner Institute, University of Potsdam, Germany

Yongjun Xu, ICT, Chinese Academy of Sciences, China




Paper Submission


Papers must follow the Springer LNCS format and be submitted as PDF.

Review is double-blind (via the HotCRP submission system). Reviews emphasize the value of the research rather than paper length.

Once an abstract is accepted, at least one author must register for the conference, present the work, and publish the paper. Papers with no registered author will not be published.


Please ensure that the submitted version meets all of the following requirements:

• The paper must describe work that has not been published elsewhere and is not under review at any other conference or journal.

• All author and affiliation information must be anonymized.

• The paper must be a printable PDF.

• The paper must include page numbers.

• The paper must be legible when printed in black and white; ensure that all figures remain readable in grayscale.


LNCS latex template: https://www.benchcouncil.org/file/llncs2e.zip
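
For reference, a minimal skeleton using the standard `llncs` document class might look like the following; the title, author, and section names are placeholders, and the class files from the template linked above are authoritative:

```latex
% Minimal LNCS skeleton (illustrative only; use the official
% template above for the authoritative class files and options).
\documentclass{llncs}
\begin{document}
\title{Paper Title}
% For double-blind review, author and institute information
% must be anonymized, e.g.:
\author{Anonymous Author(s)}
\institute{Anonymous Institute}
\maketitle
\begin{abstract}
A one-page abstract describing the evaluation problem addressed.
\end{abstract}
\section{Introduction}
Body text.
\end{document}
```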



BenchCouncil
The International Open Benchmark Council (BenchCouncil) is the pioneer of the theory of evaluation science and engineering.