Advanced International Journal for Research

E-ISSN: 3048-7641     Impact Factor: 9.11

A Widely Indexed Open Access Peer Reviewed Multidisciplinary Bi-monthly Scholarly International Journal


Lightweight and Data-Efficient Fine-Tuning of Language Models for Bidirectional Text-to-SQL and SQL-to-Text Tasks: A Systematic Review

Author(s) Ms. Bhavana Naik Sangamnerkar, Dr. Swati Namdev
Country India
Abstract Large language models (LLMs), which are increasingly central to modern natural language processing, have greatly improved semantic parsing, text generation, and natural language interfaces to databases. Among these tasks, Text-to-SQL and SQL-to-Text are essential for providing user-friendly access to structured data. However, adoption in real-time and resource-constrained contexts is limited, since standard full LLM fine-tuning remains computationally costly and data-intensive. This systematic review surveys Scopus-indexed research on lightweight and data-efficient fine-tuning techniques published between 2020 and 2025, with a focus on bidirectional Text-to-SQL and SQL-to-Text workloads. Adapters, bias-only tuning, low-rank adaptation (LoRA), prefix-tuning, and quantization-aware fine-tuning are among the parameter-efficient fine-tuning (PEFT) techniques examined in this review. Furthermore, data-efficient learning frameworks and classifier-guided data selection are examined as ways to lower annotation costs without sacrificing performance. Applications in cybersecurity for smart cities, low-resource database systems, and enterprise analytics are critically examined. The review concludes that combining parameter-efficient adaptation with classifier-guided learning provides a scalable, sustainable, and energy-conscious way to deploy language models in structured data environments. Modern LLMs, pretrained on extensive quantities of text, exhibit strong generalization across a variety of tasks, such as question answering, summarization, and structured text generation.
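To make the parameter savings behind PEFT concrete, the following is a minimal sketch of a LoRA-style low-rank update. All names, shapes, and hyperparameters here are illustrative assumptions, not the implementation studied in the reviewed papers: the frozen weight W is augmented with a trainable product B·A of rank r, so only a small fraction of parameters is fine-tuned.

```python
import numpy as np

# Illustrative LoRA-style update (assumed shapes, not from the reviewed works).
d, k, r = 512, 512, 8                   # full dimensions vs. low rank r << min(d, k)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))         # frozen pretrained weight (not updated)
A = rng.standard_normal((r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # zero-initialized, so the model starts unchanged

alpha = 16.0                            # LoRA scaling hyperparameter
W_adapted = W + (alpha / r) * (B @ A)   # effective weight used during fine-tuning

full_params = W.size                    # parameters touched by full fine-tuning
lora_params = A.size + B.size           # trainable parameters under LoRA
print(full_params, lora_params, lora_params / full_params)
```

With these assumed shapes, LoRA trains 8,192 parameters instead of 262,144 (about 3%), which is the kind of reduction that makes fine-tuning feasible in the resource-constrained settings the review targets.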
Keywords Parameter-Efficient Fine-Tuning, Text-to-SQL, SQL-to-Text, Classifier-Guided Learning, LoRA, Semantic Parsing, PEFT.
Field Computer > Data / Information
Published In Volume 6, Issue 6, November-December 2025
Published On 2025-12-30
DOI https://doi.org/10.63363/aijfr.2025.v06i06.2745
Short DOI https://doi.org/hbhk4b
