The emergence of large language models has significantly advanced scientific research. Representative models such as ChatGPT and DeepSeek R1 have brought notable transformations to the paradigm of scientific research. Although these models are general-purpose, they have demonstrated strong generalization capabilities in the field of batteries, particularly in solid-state battery research. In this study, we systematically screened 5,309,268 articles from key journals up to 2024 and extracted 124,021 relevant battery-related papers. Additionally, we comprehensively searched 17,559,750 patent applications and granted patents from the European Patent Office and the United States Patent and Trademark Office up to 2024, from which we identified 125,716 battery-related patents. Using this extensive collection of literature and patents, we conducted numerous experiments to evaluate the knowledge base, in-context learning, instruction-following, and structured output capabilities of language models. Through multi-dimensional model evaluations and analyses, we found the following. First, the model exhibited high accuracy in screening literature on inorganic solid-state electrolytes, equivalent to the level of a doctoral student in the relevant field. Based on 10,604 data entries, the model also demonstrated good recognition capabilities in identifying literature on in-situ polymerization/solidification technology; however, its understanding of this emerging technology was slightly less accurate than its understanding of solid-state electrolytes, indicating that further fine-tuning is required. Second, in tests on 10,604 data entries, the model achieved reliable accuracy in extracting inorganic ionic conductivity data. Third, based on solid-state lithium battery patents filed by four companies in South Korea and Japan over the past 20 years, the model proved effective in analyzing historical patent trends and conducting comparative analyses. Furthermore, personalized literature reports generated by the model from the latest publications also showed high accuracy. Fourth, by leveraging the model's iteration strategies, we enabled DeepSeek to engage in self-thinking and thereby provide more comprehensive responses. The results indicate that language models possess strong capabilities in content summarization and trend analysis. However, we also observed that the model may occasionally produce numerical hallucinations, and when processing vast amounts of battery-related data it still leaves room for optimization in engineering applications. Based on the characteristics of the model and the above test results, we used the DeepSeek V3-0324 model to extract data on inorganic solid electrolyte materials, comprising 5,970 entries of ionic conductivity, 387 entries of diffusion coefficients, and 3,094 entries of migration barriers. The dataset also includes over 1,000 entries on chemical, electrochemical, and mechanical properties, covering nearly all physical, chemical, and electrochemical properties associated with inorganic solid electrolytes. This signifies that the application of large language models in scientific research has transitioned from assisting research to actively advancing it. The datasets presented in this paper can be accessed at https://cmpdc.iphy.ac.cn/literature/SSE.html (DOI: https://doi.org/10.57760/sciencedb.j00213.00172).
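As an illustration of the structured-output extraction described above, the minimal sketch below shows one way an LLM served through an OpenAI-compatible chat endpoint could be prompted to return ionic-conductivity records as JSON. The endpoint URL, model identifier, prompt wording, and record schema are illustrative assumptions and do not reproduce the exact pipeline used in this work.

```python
import json
from openai import OpenAI

# Hypothetical client configuration: the endpoint and model name below are
# assumptions for illustration, not settings taken from the paper.
client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.deepseek.com")

EXTRACTION_PROMPT = (
    "From the following excerpt of a solid-state electrolyte paper, extract every "
    "reported ionic conductivity as a JSON list. Each item must contain "
    '"material" (composition string), "conductivity_S_per_cm" (number), and '
    '"temperature_K" (number or null). Return only the JSON list.\n\n'
    "Excerpt:\n{excerpt}"
)

def extract_conductivity(excerpt: str) -> list:
    """Ask the model for structured records and parse its JSON reply."""
    reply = client.chat.completions.create(
        model="deepseek-chat",  # assumed model identifier
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(excerpt=excerpt)}],
        temperature=0,  # deterministic decoding aids reproducible extraction
    )
    text = reply.choices[0].message.content.strip()
    # Strip a Markdown code fence if the model wraps its JSON in one.
    if text.startswith("```"):
        text = text.strip("`").lstrip("json").strip()
    return json.loads(text)

# Example usage on a fabricated one-sentence excerpt:
# records = extract_conductivity(
#     "The Li6PS5Cl pellet showed an ionic conductivity of 2.4 mS/cm at 298 K."
# )
```

In practice, parsed records of this kind would still need unit normalization and a numerical sanity check against the source text, given the numerical hallucinations noted above.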