LinkedIn Job Search API: How to Find Jobs Programmatically in 2025

In today's competitive job market, accessing LinkedIn job listings programmatically can provide a significant advantage for job seekers, recruiters, and HR analytics platforms. This comprehensive guide shows you how to use the Fresh LinkedIn Scraper API to search for jobs and retrieve detailed job information without the limitations of LinkedIn's official API.

Why Use a LinkedIn Job Search API?

Programmatic access to LinkedIn job data offers numerous benefits:

  • Automated Job Monitoring: Track new positions matching specific criteria
  • Custom Job Alerts: Build personalized notification systems beyond LinkedIn's native alerts
  • Job Market Analysis: Analyze hiring trends, salary ranges, and required skills
  • ATS Integration: Incorporate LinkedIn job data into applicant tracking systems
  • Job Aggregation: Create custom job boards with LinkedIn listings

Challenges with LinkedIn's Official Job API

LinkedIn's official API has several limitations that make it challenging for developers:

  • Requires complex OAuth authentication and user authorization
  • Limited endpoints specifically for job listings
  • Restrictive rate limits and usage policies
  • Lengthy application approval process
  • Limited data compared to what's visible on public job listings

Fresh LinkedIn Scraper API: A Developer-Friendly Alternative

The Fresh LinkedIn Scraper API provides a developer-friendly alternative that simplifies access to LinkedIn job data:

  • No OAuth Required: Direct access without complex authentication flows
  • Comprehensive Job Data: Access to detailed job listings similar to what you see on LinkedIn
  • Simple Integration: RESTful API with straightforward endpoints for job search and details
  • Flexible Rate Limits: Various plans to accommodate different usage volumes
  • Reliable Performance: Fast response times with high availability

Setting Up API Access

To get started with the Fresh LinkedIn Scraper API for job search:

  1. Visit Fresh LinkedIn Scraper API on RapidAPI
  2. Create a RapidAPI account if you don't already have one
  3. Subscribe to a plan that matches your usage needs
  4. Once subscribed, you'll receive your API key to use in all requests
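Rather than hard-coding the key into every snippet, you may prefer to read it from an environment variable. Below is a minimal sketch, assuming the key lives in a RAPIDAPI_KEY environment variable (that variable name is purely illustrative); the smoke test simply hits the job search endpoint covered in the next section to confirm the subscription works.

const axios = require("axios");

const RAPIDAPI_HOST = "fresh-linkedin-scraper-api.p.rapidapi.com";

// Build the RapidAPI headers from an environment variable.
// RAPIDAPI_KEY is an assumption made for this sketch, not an API requirement.
function rapidApiHeaders() {
  const apiKey = process.env.RAPIDAPI_KEY;
  if (!apiKey) {
    throw new Error("Set the RAPIDAPI_KEY environment variable first");
  }
  return {
    "x-rapidapi-host": RAPIDAPI_HOST,
    "x-rapidapi-key": apiKey,
  };
}

// Quick smoke test: one search request to verify the key and subscription.
async function smokeTest() {
  const response = await axios.get(
    `https://${RAPIDAPI_HOST}/api/v1/job/search`,
    { params: { keyword: "backend", page: 1 }, headers: rapidApiHeaders() }
  );
  console.log(`API reachable, success = ${response.data.success}`);
}

smokeTest().catch((err) => console.error(err.message));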

Searching for Jobs with the API

The job search endpoint allows you to search for jobs using various parameters:

Job Search Endpoint

GET https://fresh-linkedin-scraper-api.p.rapidapi.com/api/v1/job/search

Query Parameters

  • keyword (string, required): Search keyword for job titles or descriptions. Minimum length: 1. Example: "Backend Developer"
  • page (integer, default: 1): Page number for pagination. Required range: x > 0. Example: 1
  • sort_by (string, default: "recent"): Sort jobs by recency or relevance. Available options: recent, relevant. Example: "recent"
  • date_posted (string): Filter jobs by the date they were posted. Available options: anytime, past_month, past_week, past_24_hours. Example: "anytime"
  • geocode (string): Geographical code for location-based search. Example: "103644278"
  • company (string): Filter jobs by company ID. Example: "1441"
  • experience_level (string): Filter jobs by required experience level. Available options: internship, entry_level, associate, mid_senior, director, executive. Example: "internship"
  • remote (string): Filter jobs by work location type. Available options: onsite, remote, hybrid. Example: "onsite"
  • job_type (string): Filter jobs by type of employment. Available options: full_time, part_time, contract, temporary, volunteer, internship, other. Example: "full_time"
  • easy_apply (boolean): Filter jobs that are easy to apply for. Example: true
  • has_verifications (boolean): Filter jobs that have company verifications. Example: true
  • under_10_applicants (boolean): Filter jobs with fewer than 10 applicants. Example: true
  • fair_chance_employer (boolean): Filter jobs from fair chance employers. Example: true
Example API Call for Job Search

curl --request GET \
	--url 'https://fresh-linkedin-scraper-api.p.rapidapi.com/api/v1/job/search?keyword=backend&page=1&sort_by=recent&date_posted=past_week&remote=remote' \
	--header 'x-rapidapi-host: fresh-linkedin-scraper-api.p.rapidapi.com' \
	--header 'x-rapidapi-key: YOUR_API_KEY'
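
If you want to see exactly what URL a given set of filters produces (useful for logging or debugging), the same query can be assembled with Node's built-in URL utilities. This sketch just reproduces the curl request above; any additional filters from the parameter list can be added to the same object.

// Rebuild the curl request above with Node's built-in URL and URLSearchParams.
const { URL, URLSearchParams } = require("url");

const searchUrl = new URL(
  "https://fresh-linkedin-scraper-api.p.rapidapi.com/api/v1/job/search"
);
searchUrl.search = new URLSearchParams({
  keyword: "backend",
  page: "1",
  sort_by: "recent",
  date_posted: "past_week",
  remote: "remote",
}).toString();

console.log(searchUrl.toString());
// https://fresh-linkedin-scraper-api.p.rapidapi.com/api/v1/job/search?keyword=backend&page=1&sort_by=recent&date_posted=past_week&remote=remote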

Retrieving Detailed Job Information

Once you've found job listings of interest, you can retrieve detailed information about specific jobs:

Job Detail Endpoint

GET https://fresh-linkedin-scraper-api.p.rapidapi.com/api/v1/job/detail

Query Parameters

  • job_id (required): LinkedIn job ID (e.g., "4172815660")

Example API Call for Job Details

curl --request GET \
	--url 'https://fresh-linkedin-scraper-api.p.rapidapi.com/api/v1/job/detail?job_id=4172815660' \
	--header 'x-rapidapi-host: fresh-linkedin-scraper-api.p.rapidapi.com' \
	--header 'x-rapidapi-key: YOUR_API_KEY'

Code Implementation Examples

Here are implementation examples in popular programming languages:

Node.js Example

const axios = require("axios");

// Function to search for jobs
async function searchJobs(keyword, options = {}) {
  const defaultParams = {
    keyword: keyword,
    page: 1,
    sort_by: "recent",
  };

  const params = { ...defaultParams, ...options };

  const requestOptions = {
    method: "GET",
    url: "https://fresh-linkedin-scraper-api.p.rapidapi.com/api/v1/job/search",
    params: params,
    headers: {
      "x-rapidapi-host": "fresh-linkedin-scraper-api.p.rapidapi.com",
      "x-rapidapi-key": "YOUR_API_KEY",
    },
  };

  try {
    const response = await axios.request(requestOptions);
    return response.data;
  } catch (error) {
    console.error("Error searching jobs:", error);
    throw error;
  }
}

// Function to get job details
async function getJobDetails(jobId) {
  const options = {
    method: "GET",
    url: "https://fresh-linkedin-scraper-api.p.rapidapi.com/api/v1/job/detail",
    params: {
      job_id: jobId,
    },
    headers: {
      "x-rapidapi-host": "fresh-linkedin-scraper-api.p.rapidapi.com",
      "x-rapidapi-key": "YOUR_API_KEY",
    },
  };

  try {
    const response = await axios.request(options);
    return response.data;
  } catch (error) {
    console.error("Error getting job details:", error);
    throw error;
  }
}

// Example usage
async function main() {
  try {
    // Search for remote backend jobs posted in the last week
    const searchOptions = {
      sort_by: "recent",
      date_posted: "past_week",
      remote: "remote",
      experience_level: "mid_senior",
      job_type: "full_time",
    };

    const searchResults = await searchJobs("backend", searchOptions);
    console.log(`Found ${searchResults.total} jobs for "backend"`);

    if (searchResults.data && searchResults.data.length > 0) {
      // Get the first job's ID
      const firstJobId = searchResults.data[0].id;

      // Get detailed information about the job
      const jobDetails = await getJobDetails(firstJobId);
      console.log("Job Title:", jobDetails.data.title);
      console.log("Company:", jobDetails.data.company.name);
      console.log("Location:", jobDetails.data.location);
      console.log(
        "Description:",
        jobDetails.data.description.substring(0, 200) + "..."
      );
    }
  } catch (error) {
    console.error("Error in main function:", error);
  }
}

main();

Python Example

import requests

def search_jobs(keyword, **kwargs):
    url = "https://fresh-linkedin-scraper-api.p.rapidapi.com/api/v1/job/search"

    # Default parameters
    params = {
        "keyword": keyword,
        "page": 1,
        "sort_by": "recent"
    }

    # Update with any additional parameters
    params.update(kwargs)

    headers = {
        "x-rapidapi-host": "fresh-linkedin-scraper-api.p.rapidapi.com",
        "x-rapidapi-key": "YOUR_API_KEY"
    }

    response = requests.get(url, headers=headers, params=params)

    if response.status_code == 200:
        return response.json()
    else:
        response.raise_for_status()

def get_job_details(job_id):
    url = "https://fresh-linkedin-scraper-api.p.rapidapi.com/api/v1/job/detail"

    querystring = {"job_id": job_id}

    headers = {
        "x-rapidapi-host": "fresh-linkedin-scraper-api.p.rapidapi.com",
        "x-rapidapi-key": "YOUR_API_KEY"
    }

    response = requests.get(url, headers=headers, params=querystring)

    if response.status_code == 200:
        return response.json()
    else:
        response.raise_for_status()

# Example usage
if __name__ == "__main__":
    try:
        # Search for data science jobs with specific filters
        search_params = {
            "date_posted": "past_month",
            "experience_level": "entry_level",
            "remote": "hybrid",
            "easy_apply": True,
            "under_10_applicants": True
        }

        search_results = search_jobs("data science", **search_params)
        print(f"Found {search_results['total']} jobs for 'data science'")

        if search_results.get("data") and len(search_results["data"]) > 0:
            # Get the first job's ID
            first_job_id = search_results["data"][0]["id"]

            # Get detailed information about the job
            job_details = get_job_details(first_job_id)
            print(f"Job Title: {job_details['data']['title']}")
            print(f"Company: {job_details['data']['company']['name']}")
            print(f"Location: {job_details['data']['location']}")
            description = job_details['data']['description']
            print(f"Description: {description[:200]}...")
    except Exception as e:
        print(f"Error: {e}")

Understanding API Responses

Job Search Response Structure

The job search API returns a JSON response with the following structure:

{
  "success": true,
  "message": "success",
  "process_time": 1443,
  "cost": 1,
  "page": 1,
  "total": 70265,
  "has_more": true,
  "data": [
    {
      "id": "4196691659",
      "title": "IT Intern",
      "url": "https://www.linkedin.com/jobs/view/4196691659",
      "listed_at": "2025-03-29T20:46:47.000Z",
      "is_promote": false,
      "is_easy_apply": false,
      "location": "Alexandria, VA (Remote)",
      "company": {
        "id": "113436",
        "name": "The Columbia Group",
        "url": "https://www.linkedin.com/company/113436",
        "verified": false,
        "logo": [
          {
            "width": 200,
            "height": 200,
            "url": "https://media.licdn.com/dms/image/v2/C560BAQH_6Vivkrb6Xw/company-logo_200_200/company-logo_200_200/0/1631384298070?e=1749081600&v=beta&t=wMQMnBaySu_xY-7JRh1fBdeyNAqDyicton0iLWYhGfI",
            "expires_at": 1749081600000
          }
        ]
      }
    }
    // Additional job listings...
  ],
  "metadata": {
    "keyword": "backend",
    "filter": {}
  }
}
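
Because results are paginated, collecting more than one page means incrementing page while has_more stays true. Here is a small sketch built on the searchJobs function from the Node.js example above; maxPages is a local safety cap for illustration, not an API parameter.

// Collect jobs across multiple pages using the `page` / `has_more`
// fields shown in the response above. `maxPages` is a local safety cap.
async function collectJobs(keyword, options = {}, maxPages = 3) {
  const jobs = [];
  let page = 1;
  let hasMore = true;

  while (hasMore && page <= maxPages) {
    const result = await searchJobs(keyword, { ...options, page });
    jobs.push(...(result.data || []));
    hasMore = Boolean(result.has_more);
    page += 1;
  }
  return jobs;
}

// Usage: gather up to three pages of remote backend jobs
// collectJobs("backend", { remote: "remote" }).then((jobs) =>
//   console.log(`Collected ${jobs.length} listings`)
// );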

Job Detail Response Structure

The job detail API returns detailed information about a specific job:

{
  "success": true,
  "message": "success",
  "process_time": 308,
  "cost": 1,
  "data": {
    "id": "4172815660",
    "title": "Data Center Engineer",
    "description": "Core Infrastructure Engineer...",
    "job_url": "https://www.linkedin.com/jobs/view/4172815660",
    "location": "Herndon, VA",
    "location_geocode": "104624893",
    "country_code": "US",
    "state": "CLOSED",
    "level": "Mid-Senior level",
    "employment_status": "Full-time",
    "new": false,
    "views": 37,
    "salary": {
      "max_salary": "100000.0",
      "min_salary": "100000.0",
      "currency": "USD",
      "pay_period": "YEARLY",
      "salary_exists": true
    },
    "work_remote_allowed": false,
    "job_limit_reached": false,
    "original_listed_at": "2025-03-04T14:56:19.000Z",
    "listed_at": "2025-03-04T14:56:19.000Z",
    "expire_at": "2025-04-03T14:56:18.000Z",
    "industries": ["IT Services and IT Consulting"],
    "job_functions": ["Information Technology"],
    "benefits": [],
    "workplace_types": ["On-site"],
    "third_party_sourced": false,
    "company": {
      "id": "86687933",
      "name": "Stelvio Group",
      "universal_name": "stelvio-group",
      "url": "https://www.linkedin.com/company/stelvio-group",
      "description": "Matching talented people with innovative companies across the US...",
      "follower_count": 53684,
      "staff_count": 21,
      "staff_range": {
        "start": 11,
        "end": 50
      },
      "headquarter": {
        "country": "US",
        "city": "Austin",
        "geographic_area": "Texas",
        "line1": "111 Congress Ave",
        "postal_code": "78701"
      },
      "specialities": [
        "Legacy IT",
        "IOT",
        "Industry 4.0",
        "IT Testing",
        "Programme Management",
        "Project Management",
        "Business Analysis",
        "Data Analysis",
        "Dev Ops",
        "Application Development",
        "Infrastructure",
        "Artificial Intelligence",
        "Recruitment",
        "cyber",
        "Software"
      ],
      "industries": ["Staffing and Recruiting"],
      "logo": [
        {
          "width": 200,
          "height": 200,
          "url": "https://media.licdn.com/dms/image/v2/D4E0BAQHjelLY2a-Mnw/company-logo_200_200/company-logo_200_200/0/1666083635765?e=1749081600&v=beta&t=8mjFgl8s2Cf5zBbndFTw3gQgXbQRQqmLAWkm49hHg4g",
          "expires_at": 1749081600000
        }
      ]
    }
  }
}
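
To turn a detail response into something easier to display or store, you can pick out a handful of the fields shown above. A minimal sketch follows; the field names mirror the sample response, and salary is handled defensively since not every listing discloses it.

// Summarize a job detail response into a flat object for display.
// Field names follow the sample response above; salary may be absent.
function summarizeJob(detailResponse) {
  const job = detailResponse.data;
  const salary =
    job.salary && job.salary.salary_exists
      ? `${job.salary.min_salary}-${job.salary.max_salary} ${job.salary.currency} (${job.salary.pay_period})`
      : "Not disclosed";

  return {
    id: job.id,
    title: job.title,
    company: job.company ? job.company.name : null,
    location: job.location,
    level: job.level,
    employment: job.employment_status,
    remoteAllowed: job.work_remote_allowed,
    salary,
    url: job.job_url,
  };
}

// Usage with the getJobDetails function from the Node.js example:
// getJobDetails("4172815660").then((d) => console.log(summarizeJob(d)));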

Best Practices and Rate Limiting

When using the Fresh LinkedIn Scraper API for job search, follow these best practices:

Implement Proper Error Handling

try {
  const response = await axios.request(options);
  return response.data;
} catch (error) {
  if (error.response) {
    // Server responded with error status code
    console.error(
      `Error ${error.response.status}: ${
        error.response.data.message || "Unknown error"
      }`
    );

    // Handle rate limiting
    if (error.response.status === 429) {
      console.log("Rate limit exceeded. Please try again later.");
    }
  } else if (error.request) {
    // Request made but no response received
    console.error("No response received from the API");
  } else {
    // Error in setting up the request
    console.error("Error setting up request:", error.message);
  }
  throw error;
}

Optimize Your API Usage

  1. Implement Caching: Store job search results and details to reduce API calls (see the caching sketch after the backoff example below)
  2. Use Pagination Efficiently: Only fetch additional pages when needed
  3. Apply Specific Filters: Use filters to narrow down search results instead of broad queries
  4. Monitor Your Usage: Keep track of your API calls through the RapidAPI dashboard
  5. Implement Backoff Strategy: If you hit rate limits, use an exponential backoff strategy, as in the example below

async function fetchWithRetry(fetchFunction, maxRetries = 3, baseDelay = 1000) {
  let retries = 0;

  while (retries < maxRetries) {
    try {
      return await fetchFunction();
    } catch (error) {
      if (
        error.response &&
        error.response.status === 429 &&
        retries < maxRetries - 1
      ) {
        // Calculate delay with exponential backoff
        const delay = baseDelay * Math.pow(2, retries);
        console.log(`Rate limit exceeded. Retrying in ${delay}ms...`);
        await new Promise((resolve) => setTimeout(resolve, delay));
        retries++;
      } else {
        throw error;
      }
    }
  }
}

// Usage example
const searchResults = await fetchWithRetry(() => searchJobs("backend", { page: 1 }));
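
For the caching suggestion in point 1 above, even a small in-memory cache with a time-to-live can noticeably cut repeat calls. Here is a minimal sketch that layers caching on top of the fetchWithRetry and searchJobs functions from earlier; the 10-minute TTL is an arbitrary choice, and a production system would more likely use Redis or a similar shared store.

// Simple in-memory cache with a time-to-live, keyed by search parameters.
// The 10-minute TTL is arbitrary; adjust it to how fresh your data must be.
const cache = new Map();
const TTL_MS = 10 * 60 * 1000;

async function cachedSearchJobs(keyword, options = {}) {
  const key = JSON.stringify({ keyword, ...options });
  const hit = cache.get(key);

  if (hit && Date.now() - hit.storedAt < TTL_MS) {
    return hit.value; // served from cache, no API call spent
  }

  const value = await fetchWithRetry(() => searchJobs(keyword, options));
  cache.set(key, { value, storedAt: Date.now() });
  return value;
}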

Conclusion

The Fresh LinkedIn Scraper API provides a powerful and flexible way to programmatically access LinkedIn job listings without the limitations of the official LinkedIn API. With straightforward endpoints for job search and detailed job information, you can build custom job tracking systems, perform job market analysis, or enhance your recruitment platforms.

By following the implementation examples and best practices outlined in this guide, you can efficiently integrate LinkedIn job data into your applications while ensuring optimal performance and reliability.

Remember to respect LinkedIn's terms of service and use the API responsibly to ensure continued access to this valuable data source for your job search and recruitment needs.