

Auto-Seed VL2 maintains a set of auto-generated seeds ( \mathcal{S} ) that grows slowly over tasks. Auto-Seed VL2 operates in three phases per task: (1) seed replay, (2) online adaptation, (3) seed update.

4.1 Overall Architecture
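The three phases can be traced end-to-end with a toy stand-in. A minimal sketch, assuming a prototype-style "model" and random features in place of real encoders; only the phase structure comes from the text:

```python
import numpy as np

# Toy trace of the three per-task phases described above. The phase
# structure mirrors the text; the prototype "model" and random features
# are stand-in assumptions, not the paper's actual components.

rng = np.random.default_rng(0)
d = 64
seeds = []  # the slowly growing seed set S

def run_task(seeds):
    # Phase 1: seed replay -- revisit stored (v, w) pairs so past tasks
    # keep contributing a (stand-in) alignment term.
    replay_terms = [float(v @ w) for v, w in seeds]

    # Phase 2: online adaptation -- "train" on the new task's data,
    # here just drawing roughly aligned image/text embeddings.
    img = rng.normal(size=(16, d))
    txt = img + 0.05 * rng.normal(size=(16, d))

    # Phase 3: seed update -- distill the task into one (v, w) pair and
    # append it, so the seed set grows by one entry per task.
    v = img.mean(axis=0); v /= np.linalg.norm(v)
    w = txt.mean(axis=0); w /= np.linalg.norm(w)
    seeds.append((v, w))
    return seeds

for _ in range(3):
    seeds = run_task(seeds)

print(len(seeds))  # one seed pair per task
```

The key property the sketch exercises is that the seed set grows by a constant amount per task, which is why memory stays small relative to exemplar replay.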

The consistency loss and gradient-conditioned generation are crucial. Seed pruning is memory-efficient without hurting accuracy. We measure forward transfer (FWT): performance on task ( t ) after training on tasks ( 1, \dots, t-1 ). Auto-Seed VL2 achieves positive forward transfer (FWT = +4.1%) on VL-CL, meaning seeds from earlier tasks help learn new tasks. ER-VLM shows near-zero FWT; generative replay shows negative transfer due to noisy synthetic images.

7. Analysis and Discussion

What do generated seeds encode? We project seeds into CLIP space and compare them to real class means. The cosine similarity is 0.89 ± 0.05, indicating faithful representation. However, seeds are more “regularized”: they have lower variance along task-irrelevant directions.
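The FWT metric above can be computed from an accuracy matrix. A hedged sketch, using the common formulation (accuracy on task ( t ) after training only on tasks ( 1, \dots, t-1 ), minus an untrained baseline); the matrix values below are made up for illustration, not the paper's results:

```python
import numpy as np

# R[i, j] is accuracy on task j+1 after training through task i+1.
# baseline[j] is accuracy on task j+1 with no training at all.
# All numbers are illustrative, not the paper's results.

def forward_transfer(R, baseline):
    T = R.shape[0]
    # Average over t = 2..T of (accuracy on task t after tasks 1..t-1)
    # minus the untrained baseline on task t.
    return float(np.mean([R[t - 1, t] - baseline[t] for t in range(1, T)]))

R = np.array([
    [0.80, 0.35, 0.30],  # accuracies after training task 1
    [0.78, 0.82, 0.33],  # accuracies after training task 2
    [0.77, 0.80, 0.85],  # accuracies after training task 3
])
baseline = np.array([0.30, 0.30, 0.30])

fwt = forward_transfer(R, baseline)  # positive => earlier tasks helped
```

A positive value means the not-yet-trained task already benefits from what earlier tasks left behind, which is the effect the seeds are credited with.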

A seed is a tuple ( s = (v, w) ), where ( v \in \mathbb{R}^d ) is a visual prototype and ( w \in \mathbb{R}^d ) is a textual prototype, such that for any example ( (x, y) ) from a past task, ( \|f_I(x) - v\| ) and ( \|f_T(y) - w\| ) are small, and ( \mathrm{sim}(v, w) ) is high.
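This definition can be made concrete with class-mean prototypes. A toy sketch, assuming stand-in encoders (clustered random embeddings in place of ( f_I ), ( f_T )) and a small toy dimension:

```python
import numpy as np

# Toy illustration of the seed definition above: a seed s = (v, w) pairs
# a visual prototype v with a textual prototype w. Here v and w are the
# normalized means of synthetic image/text embeddings for one class; the
# encoders f_I, f_T and the noise level are stand-in assumptions.

rng = np.random.default_rng(0)
d = 16  # toy embedding width (real VLM embeddings are much wider)

# Pretend f_I(x) and f_T(y) cluster around a shared class direction,
# as they would for a well-aligned VLM.
class_dir = rng.normal(size=d)
class_dir /= np.linalg.norm(class_dir)
img_embs = class_dir + 0.05 * rng.normal(size=(32, d))  # f_I(x) samples
txt_embs = class_dir + 0.05 * rng.normal(size=(32, d))  # f_T(y) samples

def prototype(embs):
    m = embs.mean(axis=0)
    return m / np.linalg.norm(m)

v, w = prototype(img_embs), prototype(txt_embs)

# The seed should sit close to every past example (small ||f_I(x) - v||)
# and its two halves should agree (high sim(v, w)).
max_img_dist = float(np.linalg.norm(img_embs - v, axis=1).max())
sim_vw = float(v @ w)
```

Both conditions of the definition fall out directly: the worst-case distance to the prototype stays small, and the visual and textual halves of the seed are nearly parallel.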

Author Names Redacted for Blind Review
Affiliation Redacted

Abstract

Vision-Language Models (VLMs) have demonstrated remarkable zero-shot capabilities but suffer from catastrophic forgetting when sequentially fine-tuned on downstream tasks. Traditional continual learning (CL) methods rely on either exemplar replay (which raises privacy concerns) or static prompt pools (which lack adaptability to novel task distributions). We introduce Auto-Seed VL2, a novel framework for autonomous seed generation that dynamically synthesizes "seed" embeddings (compact, task-representative vectors) without storing real data. Auto-Seed VL2 employs a lightweight meta-generator conditioned on task-specific gradients and a contrastive consistency mechanism to align generated seeds with both the visual and textual manifolds. Extensive experiments on four challenging VLM continual learning benchmarks (CIFAR-100 to ImageNet-R, COCO Captions to Flickr30k) show that Auto-Seed VL2 outperforms state-of-the-art methods by 8.7% in average accuracy while reducing memory overhead by 95% compared to exemplar replay. Our analysis further reveals that auto-generated seeds capture inter-task transferable features, enabling forward transfer without explicit rehearsal.

1. Introduction

Large-scale pre-trained Vision-Language Models (e.g., CLIP, ALIGN, FLAVA) have become foundational backbones for multimodal understanding. However, real-world deployment requires these models to adapt continuously to new tasks (new visual domains, novel object categories, or unseen captioning styles) without forgetting previously learned knowledge. This setting, known as Continual Learning (CL), is particularly challenging for VLMs due to the intertwined nature of their dual encoders.

Auto-Seed VL2 outperforms all baselines, including ER-VLM with 10× more memory, and beats generative replay by over 13 points on average. The BLEU-4 score on C→F is particularly striking, indicating that generated seeds capture caption semantics well.

6.2 Ablation Study

Removing components from Auto-Seed VL2 on C→R:


By generating seeds in embedding space rather than pixel space, we avoid the compounding errors of full image generation. The hypernetwork’s meta-learning objective ensures that seeds are discriminative for the original task and compatible with the continually updated VLM.
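The embedding-space generation step can be sketched as a small mapping from a gradient summary to a seed vector. A minimal sketch, assuming a toy linear "hypernetwork" and illustrative shapes; none of this is the paper's actual architecture:

```python
import numpy as np

# A toy linear "hypernetwork" G maps a task-gradient summary to a
# unit-norm seed embedding, directly in embedding space (no pixels).
# The shapes and the linear form of G are illustrative assumptions.

rng = np.random.default_rng(1)
d, g_dim = 512, 128

G = rng.normal(scale=0.05, size=(d, g_dim))  # hypernetwork weights
task_grad = rng.normal(size=g_dim)           # gradient summary for one task

def generate_seed(G, task_grad):
    s = G @ task_grad
    return s / np.linalg.norm(s)  # seeds live on the unit sphere

seed = generate_seed(G, task_grad)
```

Because the output is a ( d )-dimensional embedding rather than an image, there is no pixel-space decoder whose errors could compound, which is the design point the paragraph above makes.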


Our Services

Train Schedule

Check the complete schedule of all trains across Pakistan

Online Booking

Book your tickets online from the comfort of your home

Live Tracking

Track your train in real-time on our interactive map

Train Status

Check if your train is on time, delayed or cancelled



Pakistan Railways Fare Calculator

Calculate exact ticket prices based on official Pakistan Railways fares

Fare Breakdown

Enter your journey details to calculate the exact fare

Fare Information

  • Children under 5 travel free (without seat)
  • Fares updated as per PR official rates (2024)
  • Dynamic pricing may apply during peak seasons

Popular Route Fares (One Way)

Karachi to Lahore From Rs. 2,800
Economy Class • ~18 hours
Karakoram Express, Shalimar Express
Lahore to Islamabad From Rs. 1,200
AC Business • ~4.5 hours
Subak Raftar, Subak Kharam
Karachi to Quetta From Rs. 3,500
AC Sleeper • ~22 hours
Jaffar Express
Islamabad to Karachi From Rs. 4,200
Green Line • ~20 hours
Green Line Express
Lahore to Peshawar From Rs. 1,800
AC Standard • ~8 hours
Awam Express, Khyber Mail
Karachi to Multan From Rs. 2,500
Economy Class • ~16 hours
Millat Express
Rawalpindi to Quetta From Rs. 3,800
AC Sleeper • ~25 hours
Bolan Mail
Faisalabad to Karachi From Rs. 3,200
AC Standard • ~19 hours
Faisal Express
Peshawar to Lahore From Rs. 1,700
AC Business • ~7.5 hours
Khyber Mail, Awam Express

Fares shown are approximate and may vary by train. Children (5-11) travel at 50% fare.

Pakistan Railways Online Booking

Book your train tickets in just 3 easy steps

1
Select Journey
2
Choose Seats
3
Payment

Journey Details

Booking Policy

  • Tickets can be booked up to 30 days in advance
  • 50% discount for children aged 5-11 years
  • Free travel for infants below 5 years (without seat)
  • Refunds available up to 6 hours before departure with a 15% deduction
  • Original CNIC/Passport required during travel

Major Railway Stations of Pakistan

Lahore Railway Station

Lahore Junction (LHR)

Established: 1860

A+ Category 150+ Daily Trains

The largest and busiest railway station in Pakistan, serving as the main hub for all northbound trains. Features British colonial architecture and recently renovated facilities.

Lahore Junction Railway Station, Empress Road, Lahore
042-99201116
Open 24/7

Facilities:

Free WiFi Food Court Waiting Lounges Parking Accessibility

Major Trains:

  • Karakoram Express
  • Shalimar Express
  • Allama Iqbal Express
  • Subak Raftar
Karachi Cantt Station

Karachi City (KHI)

Established: 1898

A+ Category 120+ Daily Trains

The main railway terminus of Karachi and primary station for all southbound trains. Features modern facilities and serves as the gateway to southern Pakistan.

Karachi City Station, Dr. Daud Pota Road, Karachi
021-99213311
Open 24/7

Facilities:

Free WiFi Food Court Luggage Storage Taxi Stand Medical Room

Major Trains:

  • Green Line Express
  • Awam Express
  • Karachi Express
  • Millat Express
Rawalpindi Station

Rawalpindi (RWP)

Established: 1881

A Category 80+ Daily Trains

The main railway station serving the twin cities of Rawalpindi and Islamabad. Recently upgraded with modern facilities and serves as the terminus for northern routes.

Rawalpindi Railway Station, Saddar, Rawalpindi
051-9330201
Open 24/7

Facilities:

Car Rental Hotel Booking Shopping Mall Baby Care

Major Trains:

  • Green Line Express
  • Subak Kharam
  • Sir Syed Express
  • Margalla Express

Contact Pakistan Railways

Reach out to us for inquiries, complaints, or feedback

Headquarters

Pakistan Railways Headquarters,
Near Lahore Railway Station,
Lahore, Pakistan

+92 42 99201116-20

Monday to Friday
9:00 AM to 5:00 PM

Follow Us

Helpline & Contacts

24/7 Helpline

117 (from landline)

0300-8008787 (from mobile)

Reservation

+92 42 99203145

Complaints

+92 42 99201232

Freight Services

+92 42 99201251

Lost & Found

+92 42 99201240

Send Us a Message

Regional Offices

Karachi Division

Karachi Cantt Station

+92 21 99213311

[email protected]

Lahore Division

Lahore Junction Station

+92 42 99203145

[email protected]

Rawalpindi Division

Rawalpindi Railway Station

+92 51 9330201

[email protected]

Quetta Division

Quetta Railway Station

+92 81 9201601

[email protected]