{"id":9935,"date":"2026-05-07T08:03:57","date_gmt":"2026-05-07T08:03:57","guid":{"rendered":"https:\/\/phantombuster.com\/blog\/?p=9935"},"modified":"2026-05-07T09:30:23","modified_gmt":"2026-05-07T09:30:23","slug":"linkedin-prospecting-benchmarks","status":"publish","type":"post","link":"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/","title":{"rendered":"7 LinkedIn Prospecting Benchmarks Revenue Teams Should Track Before Choosing an Automation Tool"},"content":{"rendered":"<p>Most revenue teams evaluate LinkedIn automation tools the wrong way. They compare feature lists, read G2 reviews, and ask vendors about connection limits. That approach compares features instead of measuring workflow impact on pipeline. If you can&#8217;t state your current acceptance rate, reply rate, positive reply rate, and meeting conversion rate from manual or low-volume outreach, you&#8217;re not ready to compare tools. You&#8217;re ready to measure. Treat tool evaluation like any GTM system choice: compare tools against normalized benchmarks that tie activity to pipeline, data quality, and account health. This article defines 7 measurable benchmarks, explains what each reveals about your prospecting system, and shows how to test them in a controlled pilot before you\u00a0commit.<\/p>\n<h2>Why feature comparisons fail as a selection method<\/h2>\n<p>Vendors cluster around the same talking points: personalization tokens, multi-channel sequences, CRM integrations, and reported outreach metrics. The implied promise is that the right tool creates performance on its own. This ignores process quality. Automation amplifies whatever process it touches. If targeting is weak, messaging is generic, or list quality is poor, automation scales those problems. You burn through your addressable market faster and add account risk without adding pipeline. Vendor benchmark claims also aren&#8217;t normalized.
When a tool advertises &#8220;Our users see 40% acceptance rates,&#8221; that number doesn&#8217;t account for list source, targeting precision, message-audience fit, or account history. Two teams can use the same tool and see different outcomes because the workflow is the variable, not the product. As PhantomBuster product expert <a href=\"https:\/\/www.linkedin.com\/in\/brianejmoran\/\" target=\"_blank\" rel=\"noopener\">Brian Moran<\/a> puts it, LinkedIn reacts to patterns over time, not simple action counts. A tool that pushes high daily volume without pacing controls can increase restriction risk. The benchmarks that matter are the ones you can measure, normalize, and justify to leadership.<\/p>\n<h2>How to set a baseline before you evaluate any tool<\/h2>\n<p>Before you evaluate any tool, run a controlled manual or low-volume outreach process for 2 to 4 weeks. Track acceptance rate, reply rate, positive reply rate, and meetings booked. This gives you a baseline to compare against when you pilot automation. For example, if your manual acceptance rate is below 25%, the first fix is usually targeting, profile credibility, or invite relevance, not automation. <strong>Minimum baseline requirement:<\/strong><\/p>\n<ul>\n<li>Send 50\u2013100 connection requests manually or with minimal automation, within LinkedIn&#8217;s current policies and commercial limits.<\/li>\n<li>Track acceptance, reply, and meeting conversion rates.<\/li>\n<li>Document list source, message variants, and timing.<\/li>\n<li>Write down what &#8220;normal&#8221; looks like for that account before you scale.<\/li>\n<\/ul>\n<p>This baseline becomes your control group. Any tool you evaluate should improve outcomes without adding extra workload or account risk.<\/p>\n<h2>Benchmark 1: Connection acceptance rate<\/h2>\n<h3>Definition and formula<\/h3>\n<p>Connection acceptance rate is the percentage of connection requests accepted. 
<strong>Formula:<\/strong> (Accepted connections \u00f7 Connection requests sent) \u00d7 100<\/p>\n<h3>Why it matters for revenue leaders<\/h3>\n<p>Acceptance rate is the first step. A weak rate usually signals poor targeting, low profile credibility, or message-audience mismatch. Automation doesn&#8217;t fix those problems; it scales them. If acceptance is low, fix targeting and profile first before testing automation. <strong>Directional range:<\/strong><\/p>\n<ul>\n<li>Use 30\u201345% as a starting baseline for targeted outreach based on observed patterns.\u00a0Validate against your 2\u20134 week baseline before scaling.<\/li>\n<li>Below 25% often indicates a targeting or credibility issue.<\/li>\n<li>Above 50% usually means strong ICP fit and warm signals.<\/li>\n<\/ul>\n<h3>What a weak number usually signals<\/h3>\n<p>Acceptance rates below 25% often point to one of these issues:<\/p>\n<ol>\n<li><strong>Overly broad targeting:<\/strong> Job title-only targeting without firmographic or intent filters. Searching for &#8220;Marketing Manager&#8221; across all industries and company sizes tends to underperform because the audience is too generic.<\/li>\n<li><strong>Generic or missing invite notes:<\/strong> LinkedIn limits invite notes to 300 characters. If you don&#8217;t use that space to establish relevance, you&#8217;re relying on profile credibility alone.<\/li>\n<li><strong>A profile that doesn&#8217;t establish relevance:<\/strong> If your headline, about section, and recent activity don&#8217;t communicate why a prospect should connect, they&#8217;ll skip the invite. Your profile is your first message.<\/li>\n<\/ol>\n<h3>What to look for in a tool<\/h3>\n<p>Watch for behavior changes. Abrupt jumps in volume can trigger friction even if acceptance looks fine.
Look for tools that offer:<\/p>\n<ul>\n<li><strong>Scheduling and cadence controls:<\/strong> Spread invites across working hours and avoid spikes. In PhantomBuster, set daily caps and randomized delays so patterns stay consistent with your baseline.<\/li>\n<li><strong>Acceptance tracking by list source:<\/strong>\u00a0Tag results by list source, variant, and time period.\u00a0With PhantomBuster exports, keep these tags in your dataset so you can compare variants in your CRM.<\/li>\n<li><strong>Invitation queue hygiene:<\/strong> LinkedIn limits pending invites. Keep the queue clear so it doesn&#8217;t block new outreach. You should be able to withdraw old pending invites automatically.<\/li>\n<\/ul>\n<h2>Benchmark 2: Reply rate<\/h2>\n<h3>Definition and formula<\/h3>\n<p>Reply rate is the percentage of accepted connections (or InMail recipients) who reply to your first or follow-up message. <strong>Formula:<\/strong> (Replies received \u00f7 Messages sent to accepted connections) \u00d7 100<\/p>\n<h3>Why it matters for revenue leaders<\/h3>\n<p>Reply rate measures whether your messaging gets responses after connecting with the prospect. If acceptance is healthy but the reply rate is weak, you likely have a messaging or sequencing issue, not a targeting issue. Acceptance tells you the prospect is open to connecting. Reply rate tells you your message is relevant enough to respond to. <strong>Directional range:<\/strong><\/p>\n<ul>\n<li>Start with 10\u201320% as a working target, then calibrate against your baseline. Avoid comparing reply rates directly to cold email.<\/li>\n<li>Below 10% often points to messaging, timing, or sequence design issues.<\/li>\n<\/ul>\n<h3>What a weak number usually signals<\/h3>\n<ol>\n<li><strong>The first message pitches too early:<\/strong> Most accepted connections are not ready to buy.
If the first message is a pitch, reply rates drop and negative responses increase.<\/li>\n<li><strong>No follow-up sequence, or poorly timed follow-ups:<\/strong> Many replies come from follow-ups, not the first touch. If you send one message and stop, you&#8217;re missing the part of the sequence that usually does the work.<\/li>\n<li><strong>Templates that ignore prospect context:<\/strong> If your message could be sent to anyone, it won&#8217;t resonate with anyone. Even one specific detail can change the outcome.<\/li>\n<\/ol>\n<h3>What to look for in a tool<\/h3>\n<ul>\n<li><strong>Multi-step follow-up sequencing with stop-on-reply:<\/strong> Run 3\u20135 follow-ups with configurable delays\u00a0and stop-on-reply. In PhantomBuster Automations for LinkedIn, enable stop-on-reply so sequences end the moment someone responds.<\/li>\n<li><strong>Personalization fields from permitted profile or company data:<\/strong> Use permitted, publicly available profile or company fields (e.g., company, title, recent activity) to personalize at scale. With PhantomBuster, extract only needed fields and keep messages context-specific.<\/li>\n<li><strong>Reporting by step and message variant:<\/strong> Export PhantomBuster activity by campaign and step so you can see which touch earns replies. Without that, teams change copy based on opinion instead of results.<\/li>\n<\/ul>\n<h2>Benchmark 3: Positive reply rate<\/h2>\n<h3>Definition and formula<\/h3>\n<p>Positive reply rate is the percentage of replies that express genuine interest, ask a follow-up question, or agree to a next step. Exclude clear rejections, opt-outs, and purely neutral responses.
<strong>Formula:<\/strong> (Positive replies \u00f7 Total replies) \u00d7 100<\/p>\n<h3>Why it matters for revenue leaders<\/h3>\n<p>A high reply rate doesn&#8217;t help if most replies are &#8220;Not interested&#8221; or &#8220;Please don&#8217;t contact me.&#8221; Positive reply rate separates engagement from pipeline signal. This metric shows message-audience alignment. If replies skew negative, your targeting may be close, but your framing is off. If replies skew neutral, you may be early, unclear, or offering the wrong next step. <strong>Directional range:<\/strong><\/p>\n<ul>\n<li>Use 25\u201335% of replies showing positive intent as a working baseline; validate against your own sample size and ICP.<\/li>\n<li>Below 20% often suggests messaging or targeting misalignment.<\/li>\n<li>Above 40% usually indicates strong message-market fit.<\/li>\n<\/ul>\n<h3>What a weak number usually signals<\/h3>\n<ol>\n<li><strong>Messaging that attracts responses but doesn&#8217;t qualify:<\/strong> You get attention, but not the right kind. This happens when the opening line creates curiosity but the body doesn&#8217;t deliver a credible reason to engage.<\/li>\n<li><strong>Targeting reaches the right titles, but the wrong buying context:<\/strong> If prospects consistently say &#8220;Not now,&#8221; you likely need better intent signals, tighter segmentation, or a nurture path instead of a meeting ask.<\/li>\n<li><strong>Outreach that feels automated or high-pressure:<\/strong> When messages read like templates, prospects respond defensively even if they match your ICP.<\/li>\n<\/ol>\n<h3>What to look for in a tool<\/h3>\n<ul>\n<li><strong>Reply categorization:<\/strong> Tag replies as positive, neutral, or negative and route next steps. Use PhantomBuster exports or webhooks to apply tags in your CRM and trigger tasks.<\/li>\n<li><strong>Controls to stop or pause sequences based on intent:<\/strong>\u00a0
In PhantomBuster, sync a &#8220;do-not-contact&#8221; tag so sequences pause automatically after a decline. Once someone declines, continuing the sequence damages your brand and wastes time.<\/li>\n<li><strong>Reporting on reply quality, not just reply count:<\/strong> If the tool only reports &#8220;replies,&#8221; you&#8217;ll optimize for the wrong thing.<\/li>\n<\/ul>\n<h2>Benchmark 4: Meeting booked rate<\/h2>\n<h3>Definition and formula<\/h3>\n<p>Meeting booked rate is the percentage of positive replies (or total outreach) that convert to a booked discovery call or demo. <strong>Formula:<\/strong> (Meetings booked \u00f7 Positive replies) \u00d7 100 <strong>Alternative formula:<\/strong> (Meetings booked \u00f7 Total outreach sent) \u00d7 100<\/p>\n<h3>Why it matters for revenue leaders<\/h3>\n<p>This measures conversions to meetings. If you can&#8217;t attribute meetings back to outreach, you can&#8217;t justify the investment internally or diagnose where conversion breaks down. Meeting booked rate also reveals handoff efficiency. If you get positive replies but don&#8217;t book meetings, the issue is often the transition from conversation to calendar. <strong>Directional range:<\/strong><\/p>\n<ul>\n<li>Use your baseline to set a realistic meeting conversion target. Focus on improving the share of positive replies that book time and track lift vs.\u00a0baseline.<\/li>\n<li>Sample size matters. Track this metric over at least 50\u2013100 positive replies before drawing conclusions.<\/li>\n<\/ul>\n<h3>What a weak number usually signals<\/h3>\n<ol>\n<li><strong>Handoff friction between LinkedIn and scheduling:<\/strong> Manual copying into CRM and calendars adds delays and drop-off. It also makes attribution unreliable.<\/li>\n<li><strong>Weak next-step framing:<\/strong> When a prospect shows interest, you need a clear, specific next step. 
Vague &#8220;Let&#8217;s find time&#8221; messages often lose momentum.<\/li>\n<li><strong>CRM sync gaps that break attribution and follow-up:<\/strong> If LinkedIn activity isn&#8217;t logged, you can&#8217;t connect campaigns to pipeline or enforce consistent follow-up.<\/li>\n<\/ol>\n<h3>What to look for in a tool<\/h3>\n<ul>\n<li><strong>CRM sync that keeps data current:<\/strong> Keep CRM current by creating or updating contacts from LinkedIn activity via native integrations or through PhantomBuster exports + Zapier\/Make.<\/li>\n<li><strong>Pipeline stage mapping:<\/strong>\u00a0Map touchpoints to stages by logging PhantomBuster exports to contact and deal records, then building a simple stage-movement report. You want to connect LinkedIn touchpoints to deal progression, not just top-of-funnel activity.<\/li>\n<li><strong>Exportable activity logs:<\/strong> Revenue leaders need an audit trail for reporting and governance. Logs should capture list source, timestamps, messages, replies, and outcomes.<\/li>\n<\/ul>\n<p>Use PhantomBuster to export outreach activity structured by list source and variant, then join it with CRM outcomes to close attribution gaps during the pilot.<\/p>\n<h2>Benchmark 5: Cost per qualified meeting<\/h2>\n<h3>Definition and formula<\/h3>\n<p>Cost per qualified meeting is the fully loaded cost to generate one qualified meeting through LinkedIn prospecting. <strong>Formula:<\/strong> (Tool cost + Rep time cost + Data and enrichment cost) \u00f7 Meetings booked<\/p>\n<h3>Why it matters for revenue leaders<\/h3>\n<p>Automation tools vary widely in price. If a tool doesn&#8217;t reduce cost per meeting or free up rep capacity for higher-value work, it becomes shelfware. This metric includes all costs. Many teams look only at subscription cost and ignore rep time, data costs, and the opportunity cost of manual work. <strong>Directional range:<\/strong><\/p>\n<ul>\n<li>Set a target cost per qualified meeting (CPM) based on your ACV, close rate, and payback window.
Use the formula to model against your acceptable CAC-to-LTV ratio, then test whether automation reduces CPM vs.\u00a0baseline.<\/li>\n<\/ul>\n<h3>What a weak number usually signals<\/h3>\n<ol>\n<li><strong>Tool cost isn&#8217;t offset by rep time savings:<\/strong> If reps still spend hours per day building lists, rewriting messages, or updating CRM records, the tool isn&#8217;t removing enough manual work.<\/li>\n<li><strong>Low conversion inflates cost per outcome:<\/strong> If you book 0 to 2 meetings per month, even a low-cost tool looks expensive. The primary fix is conversion, not pricing.<\/li>\n<li><strong>Data handling still happens manually:<\/strong> Manual CSV uploads, cleanup, and reconciliation can erase the time you expected to save.<\/li>\n<\/ol>\n<h3>What to look for in a tool<\/h3>\n<ul>\n<li><strong>Cloud execution:<\/strong> Run automations in the cloud so reps don&#8217;t babysit tabs. PhantomBuster executes in the cloud and continues even when a browser is closed.<\/li>\n<li><strong>Automation of repetitive steps:<\/strong> Prioritize workflow automation: list build, enrichment, follow-ups, and exports, not just clicking actions.<\/li>\n<li><strong>A pricing model you can forecast:<\/strong> Understand whether pricing scales by seat, usage, or capacity so you don&#8217;t discover cost spikes after rollout.<\/li>\n<\/ul>\n<h2>Benchmark 6: Sales cycle contribution<\/h2>\n<h3>Definition and formula<\/h3>\n<p>Sales cycle contribution is the measurable impact of LinkedIn touchpoints on deal velocity, either as a primary channel or as a nurture layer on existing pipeline. <strong>Formula:<\/strong> Compare average sales cycle length for deals with LinkedIn engagement vs. deals without.<\/p>\n<h3>Why it matters for revenue leaders<\/h3>\n<p>LinkedIn is not only a cold outbound channel. Timely touchpoints like content engagement and relevant resources can accelerate deals already in pipeline.
This metric tells you whether LinkedIn is just top-of-funnel activity or a full-cycle channel. If LinkedIn touches correlate with faster deal cycles, the tool is adding value beyond lead generation. <strong>Directional range:<\/strong><\/p>\n<ul>\n<li>Compare average cycle length for deals with vs. without LinkedIn touches, and quantify the delta over a full cycle.<\/li>\n<li>Any measurable acceleration can justify continued investment.<\/li>\n<li>No impact often means LinkedIn activity is disconnected from active deals.<\/li>\n<\/ul>\n<h3>What a weak number usually signals<\/h3>\n<ol>\n<li><strong>LinkedIn activity is siloed from pipeline deals:<\/strong> If automation only targets net-new prospects, you miss nurture opportunities.<\/li>\n<li><strong>Nurture is limited to cold sequences:<\/strong> Most teams automate connection requests and follow-ups. Fewer teams run mid-funnel touches like profile visits, post engagement, or relevant content sharing as part of a deal plan.<\/li>\n<li><strong>No visibility into which touchpoints correlate with stage movement:<\/strong> Without logging activity against contacts and deals, you can&#8217;t see what helps deals move.<\/li>\n<\/ol>\n<h3>What to look for in a tool<\/h3>\n<ul>\n<li><strong>Nurture actions beyond messaging:<\/strong> If you use LinkedIn for warming, plan deliberate touches (e.g., profile visits, post engagement, content sharing) within LinkedIn&#8217;s policies and at human-like pacing.\u00a0Use PhantomBuster&#8217;s pacing controls to avoid spikes.<\/li>\n<li><strong>Targeting that includes existing CRM contacts:<\/strong>\u00a0Include existing CRM contacts where you have a lawful basis and opt-out controls. Use PhantomBuster exports to build targeted, compliant touch lists.
You should be able to run LinkedIn touches against people already in your pipeline, not only new leads.<\/li>\n<li><strong>Reporting tied to deal stages:<\/strong> The goal is to connect touches to stage movement, not to count actions.<\/li>\n<\/ul>\n<h2>Benchmark 7: Account health and restriction rate<\/h2>\n<h3>Definition and formula<\/h3>\n<p>Account health and restriction rate is the frequency of session friction, unusual activity warnings, temporary restrictions, or identity verification prompts during prospecting. <strong>Formula:<\/strong> (Restriction events \u00f7 Active prospecting weeks), tracked as a trailing indicator.<\/p>\n<h3>Why it matters for revenue leaders<\/h3>\n<p>A tool that increases volume but also increases restriction risk is a liability. Account health is a system constraint, so it belongs in tool selection. Enforcement often escalates in stages. Early signals can look like session friction, forced re-authentication, repeated cookie expiry, or unexpected disconnections. As PhantomBuster product expert <a href=\"https:\/\/www.linkedin.com\/in\/brianejmoran\/\" target=\"_blank\" rel=\"noopener\">Brian Moran<\/a> notes, session friction is often an early warning, not an automatic ban. Treat these as feedback that your pacing or pattern needs adjustment.
<strong>Directional range:<\/strong><\/p>\n<ul>\n<li>Aim for zero restrictions during a 4-week pilot.<\/li>\n<li>Any warning or friction event should trigger a pacing review.<\/li>\n<li>Repeated restrictions usually mean the workflow is not sustainable.<\/li>\n<\/ul>\n<h3>What a weak number usually signals<\/h3>\n<ol>\n<li><strong>Volume spikes or inconsistent activity patterns:<\/strong> Abrupt increases in daily activity, especially after quiet periods, trigger friction more reliably than absolute volume.<\/li>\n<li><strong>A tool that lacks pacing controls or defaults to aggressive throughput:<\/strong> If the default settings push limits or don&#8217;t support gradual ramp-up, you&#8217;re forced to manage risk manually.<\/li>\n<li><strong>No execution visibility:<\/strong> If you can&#8217;t see what ran, when it ran, and why a session disconnected, you can&#8217;t manage risk early.<\/li>\n<\/ol>\n<h3>What to look for in a tool<\/h3>\n<ul>\n<li><strong>Scheduling that spreads actions across working hours:<\/strong> You want timing dispersion and fewer clusters of actions in short windows.<\/li>\n<li><strong>Hard daily and weekly caps:<\/strong>\u00a0In PhantomBuster, configure caps per automation so patterns stay within your baseline. Caps should prevent accidental overrun, not just suggest safer ranges.<\/li>\n<li><strong>Session status and execution logs:<\/strong> You need early warning signals and enough detail to adjust pacing before friction escalates.<\/li>\n<\/ul>\n<p>PhantomBuster&#8217;s cloud execution and configurable pacing can help you run consistent patterns during a pilot.
The value is not &#8220;safety by default,&#8221; it&#8217;s giving you controls and visibility so you can operate within your account&#8217;s baseline.<\/p>\n<h2>Decision rubric: How to map benchmarks to tool selection<\/h2>\n<table style=\"min-width: 75px;\">\n<colgroup>\n<col style=\"min-width: 25px;\" \/>\n<col style=\"min-width: 25px;\" \/>\n<col style=\"min-width: 25px;\" \/><\/colgroup>\n<tbody>\n<tr>\n<td colspan=\"1\" rowspan=\"1\"><strong>Benchmark<\/strong><\/td>\n<td colspan=\"1\" rowspan=\"1\"><strong>What it reveals<\/strong><\/td>\n<td colspan=\"1\" rowspan=\"1\"><strong>Tool capability to prioritize<\/strong><\/td>\n<\/tr>\n<tr>\n<td colspan=\"1\" rowspan=\"1\">Acceptance rate<\/td>\n<td colspan=\"1\" rowspan=\"1\">Targeting and credibility fit<\/td>\n<td colspan=\"1\" rowspan=\"1\">Pacing controls, invite queue hygiene, list source tracking<\/td>\n<\/tr>\n<tr>\n<td colspan=\"1\" rowspan=\"1\">Reply rate<\/td>\n<td colspan=\"1\" rowspan=\"1\">Messaging relevance and sequence design<\/td>\n<td colspan=\"1\" rowspan=\"1\">Multi-step sequencing, personalization fields, stop-on-reply logic<\/td>\n<\/tr>\n<tr>\n<td colspan=\"1\" rowspan=\"1\">Positive reply rate<\/td>\n<td colspan=\"1\" rowspan=\"1\">Message-audience fit and qualification<\/td>\n<td colspan=\"1\" rowspan=\"1\">Reply categorization, sequence pause controls, intent tagging<\/td>\n<\/tr>\n<tr>\n<td colspan=\"1\" rowspan=\"1\">Meeting booked rate<\/td>\n<td colspan=\"1\" rowspan=\"1\">Handoff efficiency and attribution<\/td>\n<td colspan=\"1\" rowspan=\"1\">CRM sync, pipeline mapping, exportable activity logs<\/td>\n<\/tr>\n<tr>\n<td colspan=\"1\" rowspan=\"1\">Cost per meeting<\/td>\n<td colspan=\"1\" rowspan=\"1\">Efficiency and ROI justification<\/td>\n<td colspan=\"1\" rowspan=\"1\">Cloud execution, workflow automation, forecastable pricing model<\/td>\n<\/tr>\n<tr>\n<td colspan=\"1\" rowspan=\"1\">Sales cycle contribution<\/td>\n<td colspan=\"1\" rowspan=\"1\">Nurture impact on deal 
velocity<\/td>\n<td colspan=\"1\" rowspan=\"1\">Engagement actions, targeting existing CRM contacts, stage-linked reporting<\/td>\n<\/tr>\n<tr>\n<td colspan=\"1\" rowspan=\"1\">Account health<\/td>\n<td colspan=\"1\" rowspan=\"1\">Risk management and sustainability<\/td>\n<td colspan=\"1\" rowspan=\"1\">Scheduling controls, caps, session monitoring and logs<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Run a 2 to 4 week controlled pilot and measure these 7 benchmarks before you scale. If acceptance and reply rates don&#8217;t improve versus your baseline, the constraint is targeting or messaging, not the tool. If friction or restriction events appear, reduce volume and adjust pacing before you continue.<\/p>\n<h2>Conclusion<\/h2>\n<p>Tool selection is a system decision, not a feature comparison. These 7 benchmarks\u2014acceptance rate, reply rate, positive reply rate, meeting booked rate, cost per meeting, sales cycle contribution, and account health\u2014give you a measurement framework that connects activity to pipeline and surfaces problems before automation scales them. Automation should improve conversion efficiency and support consistent, human-pattern outreach. If a tool can&#8217;t help you measure and improve these benchmarks, it isn&#8217;t suitable for your team. Before you sign a contract, run a controlled pilot. Measure your baseline, track these 7 benchmarks, and decide whether the tool improves outcomes without adding account risk. If you want to run this pilot with PhantomBuster, you can <a href=\"https:\/\/phantombuster.com\/signup\" target=\"_blank\" rel=\"noopener\">start a trial<\/a> to set pacing, export activity, and compare results against your control group.<\/p>\n<h2>Frequently asked questions<\/h2>\n<h3>Which LinkedIn prospecting benchmarks indicate real pipeline efficiency, not vanity activity, when evaluating automation tools?<\/h3>\n<p>Pipeline efficiency shows up in conversion, not action volume. 
Track acceptance rate, reply rate, positive reply rate, and meeting booked rate. Then connect outcomes to cost per qualified meeting and sales cycle contribution. If a tool increases activity but doesn&#8217;t improve downstream conversion, it&#8217;s amplifying a weak system.<\/p>\n<h3>How should revenue teams normalize LinkedIn prospecting benchmarks across reps, segments, and campaign types before comparing tools?<\/h3>\n<p>Normalize by keeping inputs consistent and comparing lift versus each rep&#8217;s baseline. Segment results by ICP slice, list source, and message variant, then compare deltas from the same starting point. Cross-rep comparisons only work when you control for account history and workflow inputs.<\/p>\n<h3>What does a weak connection acceptance rate usually diagnose?<\/h3>\n<p>A weak acceptance rate usually points to targeting and credibility before it points to tooling. Broad lists, weak relevance cues, and profiles that don&#8217;t signal &#8220;why connect&#8221; depress acceptance. Treat acceptance as a gating metric: tighten ICP filters, tighten positioning, and test invite copy before you scale any workflow.<\/p>\n<h3>If reply rate is low but acceptance rate is fine, what should we change first?<\/h3>\n<p>Low reply rate with solid acceptance usually means messaging and sequencing need work. Common fixes include removing early pitching, adding context-specific relevance, and running a follow-up sequence that stops on reply. Measure reply rate by step and variant so you can keep what works and cut what doesn&#8217;t.<\/p>\n<h3>How do we distinguish &#8220;LinkedIn throttling&#8221; from a tool failure or a real restriction during a pilot?<\/h3>\n<p>Use a simple CAP\u00a0(commercial caps), BLOCK (pattern-based enforcement), FAIL (execution issue) triage instead of assuming &#8220;throttling.&#8221; CAP means commercial caps, like Sales Navigator limits. BLOCK means pattern-based enforcement: warnings, restrictions, or verification prompts. 
FAIL means execution issues, like UI changes or session problems. Run a manual parity test: if manual works but automation doesn&#8217;t, investigate FAIL first.<\/p>\n<h3>How should account health and restriction risk be incorporated into automation tool selection?<\/h3>\n<p>Account health should be a first-class benchmark. Track restriction events per active prospecting week and treat session friction, forced re-authentication, repeated cookie expiry, and disconnections as early warning signals. Prefer tools that support consistent pacing and detailed logs because enforcement is pattern-based, not counter-based.<\/p>\n<h3>What workflow controls matter most to avoid &#8220;slide and spike&#8221; behavior when scaling LinkedIn outreach?<\/h3>\n<p>Prioritize pacing, scheduling, and staged rollout. Spread actions across working hours, keep day-to-day changes gradual, and roll out in stages: first export and vet lists, then send connection requests, then message new connections. The risk is not &#8220;automation,&#8221; it&#8217;s an abrupt pattern change relative to the account&#8217;s baseline.<\/p>\n<h3>What CRM visibility and audit trail do we need to trust pilot results and justify the purchase internally?<\/h3>\n<p>You need contact-level attribution from LinkedIn touchpoint to booked meeting, plus an exportable activity log. At minimum, store list source, message variant, timestamps, replies, and meeting outcomes in your CRM or a controlled dataset. Without an audit trail, you can&#8217;t prove lift, diagnose bottlenecks, or govern rep behavior.<\/p>\n<h3>How long should a controlled pilot run before we scale the workflow, fix the process, or reject the tool?<\/h3>\n<p>Run the pilot long enough to capture a full invite, acceptance, and follow-up cycle, then decide on lift and account health. Short tests overfit to randomness. Scale only when downstream metrics improve versus baseline without added friction. 
If they don&#8217;t, iterate targeting and messaging before increasing volume.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Track the 7 LinkedIn prospecting benchmarks that matter\u2014acceptance, reply, meetings, cost and account health\u2014to pick the right automation tool via a pilot.<\/p>\n","protected":false},"author":11,"featured_media":10564,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[55],"tags":[34,35,38],"class_list":["post-9935","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-linkedin-automation","tag-automation","tag-generate-leads","tag-guides"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.8 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>7 LinkedIn Prospecting Benchmarks Revenue Teams Should Track Before Choosing an Automation Tool<\/title>\n<meta name=\"description\" content=\"Track the 7 LinkedIn prospecting benchmarks that matter\u2014acceptance, reply, meetings, cost and account health\u2014to pick the right automation tool via a pilot.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"7 LinkedIn Prospecting Benchmarks Revenue Teams Should Track Before Choosing an Automation Tool\" \/>\n<meta property=\"og:description\" content=\"Track the 7 LinkedIn prospecting benchmarks that matter\u2014acceptance, reply, meetings, cost and account health\u2014to pick the right automation tool via a pilot.\" \/>\n<meta property=\"og:url\"
content=\"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/\" \/>\n<meta property=\"og:site_name\" content=\"PhantomBuster Blog\" \/>\n<meta property=\"article:published_time\" content=\"2026-05-07T08:03:57+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-05-07T09:30:23+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/phantombuster.com\/blog\/wp-content\/uploads\/2026\/05\/7-LinkedIn-Prospecting-Benchmarks-Revenue-Teams-Should-Track-Before-Choosing-an-Automation-Tool.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1536\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Julia Estrella\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Julia Estrella\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"16 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":[\"Article\",\"BlogPosting\"],\"@id\":\"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/\"},\"author\":{\"name\":\"Julia Estrella\",\"@id\":\"https:\/\/blogv2.phantombuster.com\/blog\/#\/schema\/person\/0149648db8c80031f255d28011c506f3\"},\"headline\":\"7 LinkedIn Prospecting Benchmarks Revenue Teams Should Track Before Choosing an Automation Tool\",\"datePublished\":\"2026-05-07T08:03:57+00:00\",\"dateModified\":\"2026-05-07T09:30:23+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/\"},\"wordCount\":3621,\"image\":{\"@id\":\"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/phantombuster.com\/blog\/wp-content\/uploads\/2026\/05\/7-LinkedIn-Prospecting-Benchmarks-Revenue-Teams-Should-Track-Before-Choosing-an-Automation-Tool.webp\",\"keywords\":[\"automation\",\"generate-leads\",\"guides\"],\"articleSection\":[\"LinkedIn Automation\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/\",\"url\":\"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/\",\"name\":\"7 LinkedIn Prospecting Benchmarks Revenue Teams Should Track Before Choosing an Automation 
Tool\",\"isPartOf\":{\"@id\":\"https:\/\/blogv2.phantombuster.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/phantombuster.com\/blog\/wp-content\/uploads\/2026\/05\/7-LinkedIn-Prospecting-Benchmarks-Revenue-Teams-Should-Track-Before-Choosing-an-Automation-Tool.webp\",\"datePublished\":\"2026-05-07T08:03:57+00:00\",\"dateModified\":\"2026-05-07T09:30:23+00:00\",\"author\":{\"@id\":\"https:\/\/blogv2.phantombuster.com\/blog\/#\/schema\/person\/0149648db8c80031f255d28011c506f3\"},\"description\":\"Track the 7 LinkedIn prospecting benchmarks that matter\u2014acceptance, reply, meetings, cost and account health\u2014to pick the right automation tool via a pilot.\u201d\",\"breadcrumb\":{\"@id\":\"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/#primaryimage\",\"url\":\"https:\/\/phantombuster.com\/blog\/wp-content\/uploads\/2026\/05\/7-LinkedIn-Prospecting-Benchmarks-Revenue-Teams-Should-Track-Before-Choosing-an-Automation-Tool.webp\",\"contentUrl\":\"https:\/\/phantombuster.com\/blog\/wp-content\/uploads\/2026\/05\/7-LinkedIn-Prospecting-Benchmarks-Revenue-Teams-Should-Track-Before-Choosing-an-Automation-Tool.webp\",\"width\":1536,\"height\":1024,\"caption\":\"A graph displaying LinkedIn prospecting benchmarks for revenue teams to evaluate automation 
tools\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Blog\",\"item\":\"https:\/\/blogv2.phantombuster.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"LinkedIn Automation\",\"item\":\"https:\/\/blogv2.phantombuster.com\/blog\/category\/linkedin-automation\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"7 LinkedIn Prospecting Benchmarks Revenue Teams Should Track Before Choosing an Automation Tool\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blogv2.phantombuster.com\/blog\/#website\",\"url\":\"https:\/\/blogv2.phantombuster.com\/blog\/\",\"name\":\"PhantomBuster Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/blogv2.phantombuster.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/blogv2.phantombuster.com\/blog\/#\/schema\/person\/0149648db8c80031f255d28011c506f3\",\"name\":\"Julia Estrella\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/blogv2.phantombuster.com\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/8dcbbffe9d8be201813e442dd111fd81339570cdb322e92b013bd46bd0b92dfc?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/8dcbbffe9d8be201813e442dd111fd81339570cdb322e92b013bd46bd0b92dfc?s=96&d=mm&r=g\",\"caption\":\"Julia Estrella\"},\"url\":\"https:\/\/phantombuster.com\/blog\/author\/julia-estrella\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"7 LinkedIn Prospecting Benchmarks Revenue Teams Should Track Before Choosing an Automation Tool","description":"Track the 7 LinkedIn prospecting benchmarks that matter\u2014acceptance, reply, meetings, cost and account health\u2014to pick the right automation tool via a pilot.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/","og_locale":"en_US","og_type":"article","og_title":"7 LinkedIn Prospecting Benchmarks Revenue Teams Should Track Before Choosing an Automation Tool","og_description":"Track the 7 LinkedIn prospecting benchmarks that matter\u2014acceptance, reply, meetings, cost and account health\u2014to pick the right automation tool via a pilot.","og_url":"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/","og_site_name":"PhantomBuster Blog","article_published_time":"2026-05-07T08:03:57+00:00","article_modified_time":"2026-05-07T09:30:23+00:00","og_image":[{"width":1536,"height":1024,"url":"https:\/\/phantombuster.com\/blog\/wp-content\/uploads\/2026\/05\/7-LinkedIn-Prospecting-Benchmarks-Revenue-Teams-Should-Track-Before-Choosing-an-Automation-Tool.webp","type":"image\/webp"}],"author":"Julia Estrella","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Julia Estrella","Est. 
reading time":"16 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":["Article","BlogPosting"],"@id":"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/#article","isPartOf":{"@id":"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/"},"author":{"name":"Julia Estrella","@id":"https:\/\/blogv2.phantombuster.com\/blog\/#\/schema\/person\/0149648db8c80031f255d28011c506f3"},"headline":"7 LinkedIn Prospecting Benchmarks Revenue Teams Should Track Before Choosing an Automation Tool","datePublished":"2026-05-07T08:03:57+00:00","dateModified":"2026-05-07T09:30:23+00:00","mainEntityOfPage":{"@id":"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/"},"wordCount":3621,"image":{"@id":"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/#primaryimage"},"thumbnailUrl":"https:\/\/phantombuster.com\/blog\/wp-content\/uploads\/2026\/05\/7-LinkedIn-Prospecting-Benchmarks-Revenue-Teams-Should-Track-Before-Choosing-an-Automation-Tool.webp","keywords":["automation","generate-leads","guides"],"articleSection":["LinkedIn Automation"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/","url":"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/","name":"7 LinkedIn Prospecting Benchmarks Revenue Teams Should Track Before Choosing an Automation 
Tool","isPartOf":{"@id":"https:\/\/blogv2.phantombuster.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/#primaryimage"},"image":{"@id":"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/#primaryimage"},"thumbnailUrl":"https:\/\/phantombuster.com\/blog\/wp-content\/uploads\/2026\/05\/7-LinkedIn-Prospecting-Benchmarks-Revenue-Teams-Should-Track-Before-Choosing-an-Automation-Tool.webp","datePublished":"2026-05-07T08:03:57+00:00","dateModified":"2026-05-07T09:30:23+00:00","author":{"@id":"https:\/\/blogv2.phantombuster.com\/blog\/#\/schema\/person\/0149648db8c80031f255d28011c506f3"},"description":"Track the 7 LinkedIn prospecting benchmarks that matter\u2014acceptance, reply, meetings, cost and account health\u2014to pick the right automation tool via a pilot.","breadcrumb":{"@id":"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/#primaryimage","url":"https:\/\/phantombuster.com\/blog\/wp-content\/uploads\/2026\/05\/7-LinkedIn-Prospecting-Benchmarks-Revenue-Teams-Should-Track-Before-Choosing-an-Automation-Tool.webp","contentUrl":"https:\/\/phantombuster.com\/blog\/wp-content\/uploads\/2026\/05\/7-LinkedIn-Prospecting-Benchmarks-Revenue-Teams-Should-Track-Before-Choosing-an-Automation-Tool.webp","width":1536,"height":1024,"caption":"A graph displaying LinkedIn prospecting benchmarks for revenue teams to evaluate automation 
tools"},{"@type":"BreadcrumbList","@id":"https:\/\/phantombuster.com\/blog\/linkedin-automation\/linkedin-prospecting-benchmarks\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Blog","item":"https:\/\/blogv2.phantombuster.com\/blog\/"},{"@type":"ListItem","position":2,"name":"LinkedIn Automation","item":"https:\/\/blogv2.phantombuster.com\/blog\/category\/linkedin-automation\/"},{"@type":"ListItem","position":3,"name":"7 LinkedIn Prospecting Benchmarks Revenue Teams Should Track Before Choosing an Automation Tool"}]},{"@type":"WebSite","@id":"https:\/\/blogv2.phantombuster.com\/blog\/#website","url":"https:\/\/blogv2.phantombuster.com\/blog\/","name":"PhantomBuster Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blogv2.phantombuster.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/blogv2.phantombuster.com\/blog\/#\/schema\/person\/0149648db8c80031f255d28011c506f3","name":"Julia Estrella","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blogv2.phantombuster.com\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/8dcbbffe9d8be201813e442dd111fd81339570cdb322e92b013bd46bd0b92dfc?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/8dcbbffe9d8be201813e442dd111fd81339570cdb322e92b013bd46bd0b92dfc?s=96&d=mm&r=g","caption":"Julia 
Estrella"},"url":"https:\/\/phantombuster.com\/blog\/author\/julia-estrella\/"}]}},"_links":{"self":[{"href":"https:\/\/phantombuster.com\/blog\/wp-json\/wp\/v2\/posts\/9935","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/phantombuster.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/phantombuster.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/phantombuster.com\/blog\/wp-json\/wp\/v2\/users\/11"}],"replies":[{"embeddable":true,"href":"https:\/\/phantombuster.com\/blog\/wp-json\/wp\/v2\/comments?post=9935"}],"version-history":[{"count":15,"href":"https:\/\/phantombuster.com\/blog\/wp-json\/wp\/v2\/posts\/9935\/revisions"}],"predecessor-version":[{"id":10604,"href":"https:\/\/phantombuster.com\/blog\/wp-json\/wp\/v2\/posts\/9935\/revisions\/10604"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/phantombuster.com\/blog\/wp-json\/wp\/v2\/media\/10564"}],"wp:attachment":[{"href":"https:\/\/phantombuster.com\/blog\/wp-json\/wp\/v2\/media?parent=9935"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/phantombuster.com\/blog\/wp-json\/wp\/v2\/categories?post=9935"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/phantombuster.com\/blog\/wp-json\/wp\/v2\/tags?post=9935"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}