Spiking Neural Networks (SNNs) show significant potential in tasks such as dynamic vision thanks to their low power consumption, event-driven operation, and sparse computation. In practical deployments, however, the algorithmic advantages of SNNs remain constrained by conventional computing architectures. To overcome the hardware bottlenecks of event-driven computing in energy efficiency and latency, this paper focuses on the Spikformer model and performs algorithm-hardware co-optimization, proposing a general accelerator architecture for the Spiking Transformer on a Field-Programmable Gate Array (FPGA). At the algorithmic level, by fusing convolutional layers with Batch Normalization (BN) layers and applying quantization-aware training, we compress the Spikformer-1-384 model from 15.92 MB to one quarter of its original size while keeping the accuracy loss within 1%. At the hardware level, a configurable accelerator tailored to spiking data streams is implemented in Verilog. It supports parallel computation across multiple time steps and flexible combinations of convolutional, fully connected, residual, and attention operators, improving parallelism and storage-bandwidth utilization. Experimental results show that, on the Xilinx Zynq UltraScale+ MPSoC (xczu7ev-ffvc1156-2-i) platform, the accelerator achieves an end-to-end inference latency of approximately 53 ms on the CIFAR-10 dataset with a time step of 4; the convolutional feature-extraction and attention modules take 48 ms and 4.634 ms, respectively. The end-to-end system power consumption is 7.181 W, corresponding to an energy efficiency of 2.63 FPS/W, outperforming an Intel i9 CPU in both overall performance and energy efficiency.
For self-attention and Multilayer Perceptron (MLP) computations, the accelerator achieves speedups of 1.70× over GPUs and 5.73× over CPUs, respectively. The project is open source at: https://github.com/tooddler/FPGA_SpikingTransformer.
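The conv–BN fusion used at the algorithmic level is a standard transformation: because BN at inference time is an affine per-channel map, its scale and shift can be folded into the weights and bias of the preceding convolution. A minimal NumPy sketch is shown below; the function name and argument layout are our own illustration, not taken from the released code.

```python
import numpy as np

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold inference-time BatchNorm parameters into a preceding conv layer.

    w:     conv weights, shape (out_ch, in_ch, kh, kw)
    b:     conv bias,    shape (out_ch,)
    gamma, beta, mean, var: per-channel BN parameters, shape (out_ch,)
    Returns (w_fold, b_fold) such that
        conv(x, w_fold) + b_fold == BN(conv(x, w) + b)  for all x.
    """
    scale = gamma / np.sqrt(var + eps)          # per-channel BN scale
    w_fold = w * scale[:, None, None, None]     # scale each output channel
    b_fold = (b - mean) * scale + beta          # fold mean/shift into the bias
    return w_fold, b_fold
```

After folding, the BN layer is removed entirely, so the hardware only ever executes plain (quantized) convolutions, which is what makes the subsequent quantization-aware training and the fixed-function accelerator datapath simpler.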