Advanced International Journal for Research
E-ISSN: 3048-7641 • Impact Factor: 9.11
A Widely Indexed Open Access Peer-Reviewed Multidisciplinary Bi-monthly Scholarly International Journal
Lightweight and Data-Efficient Fine-Tuning of Language Models for Bidirectional Text-to-SQL and SQL-to-Text Tasks: A Systematic Review
| Author(s) | Ms. Bhavana Naik Sangamnerkar, Dr. Swati Namdev |
|---|---|
| Country | India |
| Abstract | Large language models (LLMs), which are increasingly essential to modern natural language processing, have greatly improved semantic parsing, text generation, and natural language interfaces to databases. Among these tasks, Text-to-SQL and SQL-to-Text are essential for providing user-friendly access to structured data. However, adoption in real-time and resource-constrained contexts is limited, since standard full LLM fine-tuning remains computationally costly and data-intensive. This systematic review surveys Scopus-indexed research on lightweight and data-efficient fine-tuning techniques published between 2020 and 2025, with a focus on bidirectional Text-to-SQL and SQL-to-Text workloads. Adapters, bias-only tuning, low-rank adaptation (LoRA), prefix-tuning, and quantization-aware fine-tuning are among the parameter-efficient fine-tuning (PEFT) techniques examined in this review. Furthermore, data-efficient learning frameworks and classifier-guided data selection are examined as ways to lower annotation costs without sacrificing performance. Applications in cybersecurity for smart cities, low-resource database systems, and enterprise analytics are critically examined. The review concludes that combining parameter-efficient adaptation with classifier-guided learning provides a scalable, sustainable, and energy-conscious way to deploy language models in structured data environments. Modern LLMs, pretrained on extensive text corpora, exhibit strong generalization across a variety of tasks, such as question answering, summarization, and structured text generation. |
| Keywords | Parameter-Efficient Fine-Tuning, Text-to-SQL, SQL-to-Text, Classifier-Guided Learning, LoRA, Semantic Parsing, PEFT. |
| Field | Computer > Data / Information |
| Published In | Volume 6, Issue 6, November-December 2025 |
| Published On | 2025-12-30 |
| DOI | https://doi.org/10.63363/aijfr.2025.v06i06.2745 |
| Short DOI | https://doi.org/hbhk4b |
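The abstract above names low-rank adaptation (LoRA) among the reviewed PEFT techniques. As a minimal sketch of the general idea (not the specific setup evaluated in any reviewed paper), the PyTorch snippet below freezes a pretrained linear layer and trains only a low-rank update, so trainable parameters drop from out_features × in_features to r × (in_features + out_features):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, with A of shape (r, in) and B of shape (out, r)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Usage: wrap a projection of a hypothetical 768-dimensional model.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
out = layer(torch.randn(2, 768))
```

The abstract also points to classifier-guided data selection as a way to cut annotation cost. A hypothetical sketch, reusing the imports above (the tiny MLP scorer and the keep_ratio value are illustrative assumptions, not taken from the reviewed studies): a small classifier scores candidate (question, SQL) pairs, and only the top-scoring fraction is passed on to fine-tuning.

```python
def select_for_finetuning(embeddings: torch.Tensor,
                          scorer: nn.Module,
                          keep_ratio: float = 0.3) -> torch.Tensor:
    """Score candidate training examples; return indices of the top fraction."""
    with torch.no_grad():
        scores = scorer(embeddings).squeeze(-1)  # one quality score per example
    k = max(1, int(keep_ratio * scores.numel()))
    return torch.topk(scores, k).indices

# Hypothetical scorer: a tiny MLP over fixed-size example embeddings.
scorer = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
picked = select_for_finetuning(torch.randn(1000, 256), scorer)
```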
A CrossRef DOI is assigned to each research paper published in this journal; the AIJFR DOI prefix is 10.63363/aijfr.
All research papers published on this website are licensed under Creative Commons Attribution-ShareAlike 4.0 International License, and all rights belong to their respective authors/researchers.